Test Report: Docker_Linux_crio_arm64 21830

3aa0d58a4eff13dd9d5f058e659508fb4ffd2206:2025-11-01:42156

Failed tests (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.3
35 TestAddons/parallel/Registry 16.39
36 TestAddons/parallel/RegistryCreds 0.51
37 TestAddons/parallel/Ingress 144.51
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.45
41 TestAddons/parallel/CSI 47.25
42 TestAddons/parallel/Headlamp 3.63
43 TestAddons/parallel/CloudSpanner 5.34
44 TestAddons/parallel/LocalPath 9.46
45 TestAddons/parallel/NvidiaDevicePlugin 5.32
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.56
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.91
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
135 TestFunctional/parallel/ServiceCmd/Format 0.48
136 TestFunctional/parallel/ServiceCmd/URL 0.52
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.27
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
171 TestMultiControlPlane/serial/RestartSecondaryNode 506.07
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.8
191 TestJSONOutput/pause/Command 1.82
197 TestJSONOutput/unpause/Command 1.67
281 TestPause/serial/Pause 8.37
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.55
303 TestStartStop/group/old-k8s-version/serial/Pause 6.26
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.61
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.35
321 TestStartStop/group/no-preload/serial/Pause 7.95
327 TestStartStop/group/embed-certs/serial/Pause 7.78
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.63
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.61
341 TestStartStop/group/newest-cni/serial/Pause 6.12
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.21
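
Most of the TestAddons/* failures below share a single pattern: the "addons disable" call exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check shells into the node and runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory". The long-running entries (ServiceCmdConnect and ServiceCmd/DeployApp at roughly 600 s, Ingress at 144 s) look like wait/endpoint timeouts rather than the runc issue. As a rough manual re-check, the same commands can be replayed against the node; this sketch is not part of the report and assumes the addons-780397 profile from this run is still running:

    # Replay the paused-state check that the disable path performs (commands taken from the logs below).
    out/minikube-linux-arm64 -p addons-780397 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-arm64 -p addons-780397 ssh "sudo runc list -f json"      # the step that fails in this report
    out/minikube-linux-arm64 -p addons-780397 ssh "ls -ld /run/runc"            # confirms whether runc's state directory exists
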
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable volcano --alsologtostderr -v=1: exit status 11 (299.631422ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:16.841934  541492 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:16.842610  541492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:16.842626  541492 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:16.842632  541492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:16.842967  541492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:16.843301  541492 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:16.843712  541492 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:16.843733  541492 addons.go:607] checking whether the cluster is paused
	I1101 10:51:16.843874  541492 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:16.843891  541492 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:16.844445  541492 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:16.866346  541492 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:16.866403  541492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:16.884597  541492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:16.992247  541492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:16.992337  541492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:17.023219  541492 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:17.023247  541492 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:17.023253  541492 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:17.023257  541492 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:17.023260  541492 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:17.023264  541492 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:17.023267  541492 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:17.023270  541492 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:17.023292  541492 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:17.023303  541492 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:17.023306  541492 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:17.023310  541492 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:17.023313  541492 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:17.023317  541492 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:17.023320  541492 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:17.023325  541492 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:17.023332  541492 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:17.023336  541492 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:17.023339  541492 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:17.023342  541492 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:17.023348  541492 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:17.023351  541492 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:17.023372  541492 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:17.023380  541492 cri.go:89] found id: ""
	I1101 10:51:17.023448  541492 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:17.038921  541492 out.go:203] 
	W1101 10:51:17.041993  541492 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:17.042070  541492 out.go:285] * 
	* 
	W1101 10:51:17.049389  541492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:17.052510  541492 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)
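
The same MK_ADDON_DISABLE_PAUSED / "open /run/runc: no such file or directory" stack repeats in the Registry, RegistryCreds and most of the other addon failures that follow, suggesting they share one underlying cause rather than being independent regressions. One thing worth checking (a diagnostic sketch, not taken from the report) is which low-level OCI runtime the CRI-O node is actually using, since "runc list" only sees containers created by the runc binary itself:

    # Diagnostic sketch, assuming the addons-780397 profile from this run.
    out/minikube-linux-arm64 -p addons-780397 ssh "sudo crictl info"             # CRI runtime status as JSON; runtime details vary by CRI-O version
    out/minikube-linux-arm64 -p addons-780397 ssh "ls -ld /run/runc /run/crun"   # runc keeps state under /run/runc; crun typically uses /run/crun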

                                                
                                    
TestAddons/parallel/Registry (16.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.885192ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.014509306s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003635365s
addons_test.go:392: (dbg) Run:  kubectl --context addons-780397 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-780397 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-780397 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.722280285s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable registry --alsologtostderr -v=1: exit status 11 (324.738823ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:44.461899  542053 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:44.464965  542053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:44.465048  542053 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:44.465070  542053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:44.465370  542053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:44.465810  542053 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:44.466228  542053 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:44.466272  542053 addons.go:607] checking whether the cluster is paused
	I1101 10:51:44.466400  542053 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:44.466438  542053 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:44.467030  542053 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:44.491154  542053 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:44.491235  542053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:44.508648  542053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:44.616695  542053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:44.616807  542053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:44.649686  542053 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:44.649749  542053 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:44.649755  542053 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:44.649758  542053 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:44.649762  542053 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:44.649767  542053 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:44.649770  542053 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:44.649773  542053 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:44.649777  542053 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:44.649784  542053 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:44.649788  542053 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:44.649791  542053 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:44.649794  542053 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:44.649797  542053 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:44.649800  542053 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:44.649806  542053 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:44.649809  542053 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:44.649815  542053 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:44.649822  542053 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:44.649825  542053 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:44.649830  542053 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:44.649833  542053 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:44.649836  542053 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:44.649839  542053 cri.go:89] found id: ""
	I1101 10:51:44.649891  542053 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:44.665023  542053 out.go:203] 
	W1101 10:51:44.667813  542053 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:44.667841  542053 out.go:285] * 
	* 
	W1101 10:51:44.674924  542053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:44.677790  542053 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.39s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.653949ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-780397
addons_test.go:332: (dbg) Run:  kubectl --context addons-780397 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (272.394264ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:52:37.563009  544111 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:37.563804  544111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:37.563846  544111 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:37.563873  544111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:37.564171  544111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:52:37.564511  544111 mustload.go:66] Loading cluster: addons-780397
	I1101 10:52:37.564979  544111 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:37.565024  544111 addons.go:607] checking whether the cluster is paused
	I1101 10:52:37.565165  544111 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:37.565203  544111 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:52:37.565769  544111 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:52:37.585465  544111 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:37.585516  544111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:52:37.611125  544111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:52:37.716446  544111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:37.716556  544111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:37.749320  544111 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:52:37.749353  544111 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:52:37.749359  544111 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:52:37.749363  544111 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:52:37.749367  544111 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:52:37.749371  544111 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:52:37.749375  544111 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:52:37.749378  544111 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:52:37.749381  544111 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:52:37.749388  544111 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:52:37.749391  544111 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:52:37.749395  544111 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:52:37.749398  544111 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:52:37.749402  544111 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:52:37.749405  544111 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:52:37.749410  544111 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:52:37.749419  544111 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:52:37.749423  544111 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:52:37.749426  544111 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:52:37.749429  544111 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:52:37.749434  544111 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:52:37.749436  544111 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:52:37.749439  544111 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:52:37.749442  544111 cri.go:89] found id: ""
	I1101 10:52:37.749500  544111 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:52:37.765205  544111 out.go:203] 
	W1101 10:52:37.768170  544111 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:52:37.768201  544111 out.go:285] * 
	* 
	W1101 10:52:37.775385  544111 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:52:37.778340  544111 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)

                                                
                                    
TestAddons/parallel/Ingress (144.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-780397 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-780397 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-780397 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b21197be-d41b-4706-9709-b626477a8a83] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b21197be-d41b-4706-9709-b626477a8a83] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003441941s
I1101 10:52:14.323195  534720 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.39758545s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-780397 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
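
Unlike the addon-disable failures, this one is a timeout: the curl issued through "minikube ssh" ran for roughly 2m10s and the remote command exited with status 28, which is curl's "operation timed out" code, so no response ever came back from port 80 on the node. A hedged manual re-check (the extra curl flags and the unnamed kubectl queries are assumptions, not taken from the report):

    # Sketch only: confirm the controller is serving before repeating the probe used by the test.
    kubectl --context addons-780397 -n ingress-nginx get pods -o wide
    kubectl --context addons-780397 get ingress -A
    out/minikube-linux-arm64 -p addons-780397 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
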
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-780397
helpers_test.go:243: (dbg) docker inspect addons-780397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6",
	        "Created": "2025-11-01T10:48:56.100696119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 535884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:48:56.159996171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/hosts",
	        "LogPath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6-json.log",
	        "Name": "/addons-780397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-780397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-780397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6",
	                "LowerDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-780397",
	                "Source": "/var/lib/docker/volumes/addons-780397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-780397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-780397",
	                "name.minikube.sigs.k8s.io": "addons-780397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "822dd58c1fe6787728cc98f29ab3db06ea50e99d9ff68359a4651e97910ec3c0",
	            "SandboxKey": "/var/run/docker/netns/822dd58c1fe6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-780397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:85:b9:0e:b3:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5dad9c17d41b068f1874aba9bc4d83a7bdafd82a350976f89ac87070117f67d2",
	                    "EndpointID": "55663942b33fe339acf47e76c9e79524da5bc8d3e830819463b546ccaf0c44dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-780397",
	                        "7d2662ca9bdd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-780397 -n addons-780397
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-780397 logs -n 25: (1.592127803s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-524809                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-524809 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ --download-only -p binary-mirror-212672 --alsologtostderr --binary-mirror http://127.0.0.1:46695 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-212672   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ delete  │ -p binary-mirror-212672                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-212672   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ addons  │ enable dashboard -p addons-780397                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ addons  │ disable dashboard -p addons-780397                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ start   │ -p addons-780397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:51 UTC │
	│ addons  │ addons-780397 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ ip      │ addons-780397 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ addons  │ addons-780397 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ ssh     │ addons-780397 ssh cat /opt/local-path-provisioner/pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ addons  │ addons-780397 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ enable headlamp -p addons-780397 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ ssh     │ addons-780397 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ addons  │ addons-780397 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ addons  │ addons-780397 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-780397                                                                                                                                                                                                                                                                                                                                                                                           │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:52 UTC │
	│ addons  │ addons-780397 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │                     │
	│ ip      │ addons-780397 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:54 UTC │ 01 Nov 25 10:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:48:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:48:30.104953  535488 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:48:30.105179  535488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:30.105213  535488 out.go:374] Setting ErrFile to fd 2...
	I1101 10:48:30.105235  535488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:30.105560  535488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:48:30.106168  535488 out.go:368] Setting JSON to false
	I1101 10:48:30.107139  535488 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9059,"bootTime":1761985051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:48:30.107259  535488 start.go:143] virtualization:  
	I1101 10:48:30.112769  535488 out.go:179] * [addons-780397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:48:30.116013  535488 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:48:30.116054  535488 notify.go:221] Checking for updates...
	I1101 10:48:30.119120  535488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:48:30.122178  535488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:48:30.125111  535488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 10:48:30.128024  535488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:48:30.131091  535488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:48:30.134401  535488 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:48:30.159243  535488 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:48:30.159374  535488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:30.224418  535488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:48:30.210139372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:30.224520  535488 docker.go:319] overlay module found
	I1101 10:48:30.227652  535488 out.go:179] * Using the docker driver based on user configuration
	I1101 10:48:30.230407  535488 start.go:309] selected driver: docker
	I1101 10:48:30.230428  535488 start.go:930] validating driver "docker" against <nil>
	I1101 10:48:30.230443  535488 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:48:30.231170  535488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:30.291205  535488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:48:30.281163164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:30.291363  535488 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:48:30.291604  535488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:48:30.294465  535488 out.go:179] * Using Docker driver with root privileges
	I1101 10:48:30.297391  535488 cni.go:84] Creating CNI manager for ""
	I1101 10:48:30.297468  535488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:48:30.297482  535488 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:48:30.297573  535488 start.go:353] cluster config:
	{Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 10:48:30.302497  535488 out.go:179] * Starting "addons-780397" primary control-plane node in "addons-780397" cluster
	I1101 10:48:30.305384  535488 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:48:30.308278  535488 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:48:30.311065  535488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:48:30.311133  535488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:48:30.311148  535488 cache.go:59] Caching tarball of preloaded images
	I1101 10:48:30.311151  535488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:48:30.311232  535488 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:48:30.311242  535488 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:48:30.311581  535488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/config.json ...
	I1101 10:48:30.311602  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/config.json: {Name:mkafaa477b09cf7e80b93a7e65a9a24fb797d1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:48:30.327051  535488 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:48:30.327188  535488 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 10:48:30.327211  535488 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 10:48:30.327216  535488 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 10:48:30.327224  535488 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 10:48:30.327229  535488 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 10:48:48.149412  535488 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 10:48:48.149467  535488 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:48:48.149497  535488 start.go:360] acquireMachinesLock for addons-780397: {Name:mk3b3a54a349679dc1852b86688785584ad3651f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:48:48.149631  535488 start.go:364] duration metric: took 108.474µs to acquireMachinesLock for "addons-780397"
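
	The acquireMachinesLock step above retries under the Delay:500ms / Timeout:10m0s parameters shown in the lock spec. A minimal Go sketch of that retry-until-timeout pattern (an illustration only, not minikube's actual lock.go; the lock file path below is hypothetical):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // acquireLock creates the lock file with O_EXCL and retries every delay
	    // until timeout expires; the returned func releases the lock.
	    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	    	deadline := time.Now().Add(timeout)
	    	for {
	    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	    		if err == nil {
	    			f.Close()
	    			return func() { os.Remove(path) }, nil
	    		}
	    		if time.Now().After(deadline) {
	    			return nil, fmt.Errorf("timed out acquiring %s", path)
	    		}
	    		time.Sleep(delay)
	    	}
	    }

	    func main() {
	    	// Path is illustrative; minikube keeps its own lock files under MINIKUBE_HOME.
	    	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	defer release()
	    	fmt.Println("lock held")
	    }
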
	I1101 10:48:48.149663  535488 start.go:93] Provisioning new machine with config: &{Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:48:48.149776  535488 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:48:48.153025  535488 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 10:48:48.153269  535488 start.go:159] libmachine.API.Create for "addons-780397" (driver="docker")
	I1101 10:48:48.153310  535488 client.go:173] LocalClient.Create starting
	I1101 10:48:48.153435  535488 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 10:48:48.953004  535488 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 10:48:49.242063  535488 cli_runner.go:164] Run: docker network inspect addons-780397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:48:49.257080  535488 cli_runner.go:211] docker network inspect addons-780397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:48:49.257172  535488 network_create.go:284] running [docker network inspect addons-780397] to gather additional debugging logs...
	I1101 10:48:49.257194  535488 cli_runner.go:164] Run: docker network inspect addons-780397
	W1101 10:48:49.272277  535488 cli_runner.go:211] docker network inspect addons-780397 returned with exit code 1
	I1101 10:48:49.272309  535488 network_create.go:287] error running [docker network inspect addons-780397]: docker network inspect addons-780397: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-780397 not found
	I1101 10:48:49.272337  535488 network_create.go:289] output of [docker network inspect addons-780397]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-780397 not found
	
	** /stderr **
	I1101 10:48:49.272453  535488 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:48:49.288518  535488 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0fcc0}
	I1101 10:48:49.288554  535488 network_create.go:124] attempt to create docker network addons-780397 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 10:48:49.288609  535488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-780397 addons-780397
	I1101 10:48:49.348075  535488 network_create.go:108] docker network addons-780397 192.168.49.0/24 created
	I1101 10:48:49.348106  535488 kic.go:121] calculated static IP "192.168.49.2" for the "addons-780397" container
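
	The network step picks the first free private /24 (192.168.49.0/24 here), uses its first host address as the gateway and the next one as the node's static IP, which is how 192.168.49.1 and 192.168.49.2 appear above. A small Go illustration of that address arithmetic (an assumption about the layout, not minikube's network.go):

	    package main

	    import (
	    	"fmt"
	    	"net/netip"
	    )

	    func main() {
	    	// 192.168.49.0/24, as chosen in the log above.
	    	prefix := netip.MustParsePrefix("192.168.49.0/24")
	    	gateway := prefix.Addr().Next() // first host address: 192.168.49.1
	    	nodeIP := gateway.Next()        // first client address: 192.168.49.2
	    	fmt.Println("gateway:", gateway, "node IP:", nodeIP)
	    }
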
	I1101 10:48:49.348189  535488 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:48:49.363299  535488 cli_runner.go:164] Run: docker volume create addons-780397 --label name.minikube.sigs.k8s.io=addons-780397 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:48:49.381328  535488 oci.go:103] Successfully created a docker volume addons-780397
	I1101 10:48:49.381420  535488 cli_runner.go:164] Run: docker run --rm --name addons-780397-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-780397 --entrypoint /usr/bin/test -v addons-780397:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:48:51.598637  535488 cli_runner.go:217] Completed: docker run --rm --name addons-780397-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-780397 --entrypoint /usr/bin/test -v addons-780397:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.217167035s)
	I1101 10:48:51.598679  535488 oci.go:107] Successfully prepared a docker volume addons-780397
	I1101 10:48:51.598709  535488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:48:51.598730  535488 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:48:51.598805  535488 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-780397:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:48:56.030427  535488 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-780397:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431581104s)
	I1101 10:48:56.030458  535488 kic.go:203] duration metric: took 4.431724376s to extract preloaded images to volume ...
	W1101 10:48:56.030605  535488 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:48:56.030761  535488 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:48:56.086038  535488 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-780397 --name addons-780397 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-780397 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-780397 --network addons-780397 --ip 192.168.49.2 --volume addons-780397:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:48:56.381366  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Running}}
	I1101 10:48:56.400346  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:48:56.429618  535488 cli_runner.go:164] Run: docker exec addons-780397 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:48:56.483642  535488 oci.go:144] the created container "addons-780397" has a running status.
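
	After the docker run, the container state is polled with docker container inspect --format={{.State.Status}} until it reports running, as the repeated inspect calls above show. A minimal Go version of that polling loop (illustrative; minikube shells out through its cli_runner rather than this exact code):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    // waitRunning polls `docker container inspect` until the container
	    // reports a "running" state or the timeout is reached.
	    func waitRunning(name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		out, err := exec.Command("docker", "container", "inspect", name,
	    			"--format", "{{.State.Status}}").Output()
	    		if err == nil && strings.TrimSpace(string(out)) == "running" {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("container %s not running after %s", name, timeout)
	    }

	    func main() {
	    	// Container name taken from the log above.
	    	if err := waitRunning("addons-780397", 30*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }
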
	I1101 10:48:56.483671  535488 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa...
	I1101 10:48:56.754426  535488 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:48:56.778268  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:48:56.797073  535488 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:48:56.797092  535488 kic_runner.go:114] Args: [docker exec --privileged addons-780397 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:48:56.857049  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:48:56.883734  535488 machine.go:94] provisionDockerMachine start ...
	I1101 10:48:56.883853  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:48:56.908190  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:48:56.908817  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:48:56.908842  535488 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:48:56.909850  535488 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:49:00.123306  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-780397
	
	I1101 10:49:00.123334  535488 ubuntu.go:182] provisioning hostname "addons-780397"
	I1101 10:49:00.123415  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:00.191347  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:00.191669  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:49:00.191681  535488 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-780397 && echo "addons-780397" | sudo tee /etc/hostname
	I1101 10:49:00.446030  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-780397
	
	I1101 10:49:00.446146  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:00.467417  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:00.467748  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:49:00.467773  535488 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-780397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-780397/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-780397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:49:00.617951  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
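
	Each "About to run SSH command" / "SSH cmd err, output" pair above is a remote command run over the forwarded SSH port (127.0.0.1:33495) using the generated machine key. A stripped-down Go sketch of that flow with golang.org/x/crypto/ssh (the key path and port are copied from the log; the code itself is illustrative, not minikube's libmachine client):

	    package main

	    import (
	    	"fmt"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	// Key path as reported earlier in the log; adjust for your environment.
	    	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa")
	    	if err != nil {
	    		panic(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(keyPEM)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	    	}
	    	client, err := ssh.Dial("tcp", "127.0.0.1:33495", cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer client.Close()

	    	sess, err := client.NewSession()
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer sess.Close()

	    	out, err := sess.CombinedOutput("hostname")
	    	fmt.Printf("err=%v output=%s", err, out)
	    }
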
	I1101 10:49:00.617978  535488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 10:49:00.618009  535488 ubuntu.go:190] setting up certificates
	I1101 10:49:00.618019  535488 provision.go:84] configureAuth start
	I1101 10:49:00.618081  535488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-780397
	I1101 10:49:00.635149  535488 provision.go:143] copyHostCerts
	I1101 10:49:00.635235  535488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 10:49:00.635390  535488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 10:49:00.635459  535488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 10:49:00.635519  535488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.addons-780397 san=[127.0.0.1 192.168.49.2 addons-780397 localhost minikube]
	I1101 10:49:02.244484  535488 provision.go:177] copyRemoteCerts
	I1101 10:49:02.244555  535488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:49:02.244597  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.267203  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:02.373321  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:49:02.390286  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 10:49:02.407276  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:49:02.424531  535488 provision.go:87] duration metric: took 1.806487246s to configureAuth
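
	configureAuth above generates a CA, then a server certificate whose SANs are listed in the log (127.0.0.1, 192.168.49.2, addons-780397, localhost, minikube), and copies the results to /etc/docker on the machine. A self-contained Go sketch of issuing such a CA-signed certificate with crypto/x509 (illustrative only, assuming RSA keys; not minikube's certs code):

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func check(err error) {
	    	if err != nil {
	    		panic(err)
	    	}
	    }

	    func main() {
	    	// Self-signed CA.
	    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	    	check(err)
	    	caTmpl := &x509.Certificate{
	    		SerialNumber:          big.NewInt(1),
	    		Subject:               pkix.Name{CommonName: "minikubeCA"},
	    		NotBefore:             time.Now(),
	    		NotAfter:              time.Now().AddDate(10, 0, 0),
	    		IsCA:                  true,
	    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	    		BasicConstraintsValid: true,
	    	}
	    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    	check(err)
	    	caCert, err := x509.ParseCertificate(caDER)
	    	check(err)

	    	// Server certificate carrying the SANs seen in the log.
	    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	    	check(err)
	    	srvTmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{CommonName: "addons-780397"},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().AddDate(3, 0, 0),
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	    		DNSNames:     []string{"addons-780397", "localhost", "minikube"},
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	    	check(err)

	    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER}))
	    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	    }
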
	I1101 10:49:02.424560  535488 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:49:02.424751  535488 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:02.424864  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.440977  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:02.441278  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:49:02.441293  535488 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:49:02.694668  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:49:02.694695  535488 machine.go:97] duration metric: took 5.810938498s to provisionDockerMachine
	I1101 10:49:02.694706  535488 client.go:176] duration metric: took 14.541378697s to LocalClient.Create
	I1101 10:49:02.694719  535488 start.go:167] duration metric: took 14.541451379s to libmachine.API.Create "addons-780397"
	I1101 10:49:02.694735  535488 start.go:293] postStartSetup for "addons-780397" (driver="docker")
	I1101 10:49:02.694750  535488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:49:02.694828  535488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:49:02.694886  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.713390  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:02.817608  535488 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:49:02.820911  535488 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:49:02.820941  535488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:49:02.820953  535488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 10:49:02.821025  535488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 10:49:02.821068  535488 start.go:296] duration metric: took 126.321528ms for postStartSetup
	I1101 10:49:02.821397  535488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-780397
	I1101 10:49:02.837310  535488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/config.json ...
	I1101 10:49:02.837603  535488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:49:02.837656  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.854875  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:02.954726  535488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:49:02.959195  535488 start.go:128] duration metric: took 14.809402115s to createHost
	I1101 10:49:02.959220  535488 start.go:83] releasing machines lock for "addons-780397", held for 14.809575073s
	I1101 10:49:02.959318  535488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-780397
	I1101 10:49:02.976039  535488 ssh_runner.go:195] Run: cat /version.json
	I1101 10:49:02.976107  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.976352  535488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:49:02.976420  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.997455  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:03.005941  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:03.196943  535488 ssh_runner.go:195] Run: systemctl --version
	I1101 10:49:03.203476  535488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:49:03.240522  535488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:49:03.245219  535488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:49:03.245296  535488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:49:03.274750  535488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:49:03.274776  535488 start.go:496] detecting cgroup driver to use...
	I1101 10:49:03.274837  535488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:49:03.274914  535488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:49:03.291254  535488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:49:03.304035  535488 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:49:03.304100  535488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:49:03.321926  535488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:49:03.340099  535488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:49:03.448361  535488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:49:03.586379  535488 docker.go:234] disabling docker service ...
	I1101 10:49:03.586444  535488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:49:03.608320  535488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:49:03.621198  535488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:49:03.738506  535488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:49:03.857277  535488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:49:03.870760  535488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:49:03.884936  535488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:49:03.885002  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.894371  535488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:49:03.894453  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.903526  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.912514  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.921531  535488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:49:03.929730  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.938684  535488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.951846  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.960418  535488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:49:03.967915  535488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:49:03.975579  535488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:04.086057  535488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:49:04.216573  535488 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:49:04.216723  535488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:49:04.220524  535488 start.go:564] Will wait 60s for crictl version
	I1101 10:49:04.220597  535488 ssh_runner.go:195] Run: which crictl
	I1101 10:49:04.224209  535488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:49:04.249441  535488 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:49:04.249565  535488 ssh_runner.go:195] Run: crio --version
	I1101 10:49:04.277861  535488 ssh_runner.go:195] Run: crio --version
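
	The crio restart above is followed by a bounded wait for /var/run/crio/crio.sock and then crictl/crio version probes. A compact Go sketch of waiting on a unix socket with a deadline (illustrative; minikube performs these checks over ssh_runner rather than dialing the socket directly):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    // waitForSocket dials the unix socket until it accepts a connection or the
	    // timeout expires, mirroring the "Will wait 60s for socket path" step above.
	    func waitForSocket(path string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		conn, err := net.Dial("unix", path)
	    		if err == nil {
	    			conn.Close()
	    			return nil
	    		}
	    		time.Sleep(250 * time.Millisecond)
	    	}
	    	return fmt.Errorf("socket %s not ready after %s", path, timeout)
	    }

	    func main() {
	    	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	    }
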
	I1101 10:49:04.310631  535488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:49:04.313541  535488 cli_runner.go:164] Run: docker network inspect addons-780397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:49:04.329791  535488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 10:49:04.333639  535488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
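
	The one-liner above makes the host.minikube.internal mapping idempotent: any existing entry for the name is filtered out of /etc/hosts before the fresh line is appended. The same pattern expressed in Go (a sketch; minikube runs the shell form shown in the log, and the scratch path in main is hypothetical):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"strings"
	    )

	    // ensureHostsEntry drops any stale "<ip>\t<name>" line and appends a new one.
	    func ensureHostsEntry(path, ip, name string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	    		if !strings.HasSuffix(line, "\t"+name) {
	    			kept = append(kept, line)
	    		}
	    	}
	    	kept = append(kept, ip+"\t"+name)
	    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	    	// Use a scratch copy when trying this out; writing /etc/hosts needs root.
	    	fmt.Println(ensureHostsEntry("/tmp/hosts.test", "192.168.49.1", "host.minikube.internal"))
	    }
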
	I1101 10:49:04.343366  535488 kubeadm.go:884] updating cluster {Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:49:04.343487  535488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:49:04.343546  535488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:04.377239  535488 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:04.377260  535488 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:49:04.377314  535488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:04.403005  535488 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:04.403031  535488 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:49:04.403040  535488 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 10:49:04.403128  535488 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-780397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:49:04.403213  535488 ssh_runner.go:195] Run: crio config
	I1101 10:49:04.482972  535488 cni.go:84] Creating CNI manager for ""
	I1101 10:49:04.482998  535488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:04.483017  535488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:49:04.483063  535488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-780397 NodeName:addons-780397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:49:04.483198  535488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-780397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:49:04.483273  535488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:49:04.490891  535488 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:49:04.491009  535488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:49:04.498674  535488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 10:49:04.512026  535488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:49:04.525112  535488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1101 10:49:04.537775  535488 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:49:04.541252  535488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:49:04.550929  535488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:04.673791  535488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:04.689947  535488 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397 for IP: 192.168.49.2
	I1101 10:49:04.689980  535488 certs.go:195] generating shared ca certs ...
	I1101 10:49:04.690012  535488 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:04.690182  535488 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 10:49:04.855706  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt ...
	I1101 10:49:04.855735  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt: {Name:mkd8cc2887830a159b2b1c088105b8ccf386520b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:04.855964  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key ...
	I1101 10:49:04.855979  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key: {Name:mk207a6fa593d5625b07de77baa039bb8fc57bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:04.856070  535488 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 10:49:05.388339  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt ...
	I1101 10:49:05.388372  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt: {Name:mk0f59d993b941d17205757d41b370114a519a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.388567  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key ...
	I1101 10:49:05.388576  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key: {Name:mk4449e2883a1aab70403a8d895c70ff11b4b1c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.388644  535488 certs.go:257] generating profile certs ...
	I1101 10:49:05.388706  535488 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.key
	I1101 10:49:05.388723  535488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt with IP's: []
	I1101 10:49:05.544497  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt ...
	I1101 10:49:05.544528  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: {Name:mk51ba45dd6c14cf21a89025d4cd908340a0bd64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.544717  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.key ...
	I1101 10:49:05.544730  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.key: {Name:mkef223e034dabd3326eab9daab64983adec8a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.544825  535488 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1
	I1101 10:49:05.544850  535488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 10:49:05.863565  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1 ...
	I1101 10:49:05.863596  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1: {Name:mk9254a5bc443d9f07db240ebfd018a13e8bf5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.863765  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1 ...
	I1101 10:49:05.863781  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1: {Name:mkcc27664c24c9be4d28b11b66f6567eb79c4f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.863867  535488 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt
	I1101 10:49:05.863954  535488 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key
	I1101 10:49:05.864018  535488 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key
	I1101 10:49:05.864040  535488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt with IP's: []
	I1101 10:49:06.170540  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt ...
	I1101 10:49:06.170573  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt: {Name:mk63530afe97c13fc8ee2daeda202fbe67a9b5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:06.170747  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key ...
	I1101 10:49:06.170761  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key: {Name:mk90a2c6ef768d14b23aab641ad8dfde452d56de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:06.170954  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 10:49:06.171004  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:49:06.171042  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:49:06.171071  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 10:49:06.171687  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:49:06.189110  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:49:06.205936  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:49:06.223945  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:49:06.241394  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:49:06.258152  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:49:06.275381  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:49:06.292683  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:49:06.309424  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:49:06.325835  535488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:49:06.337890  535488 ssh_runner.go:195] Run: openssl version
	I1101 10:49:06.344626  535488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:49:06.352604  535488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:06.356194  535488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:06.356265  535488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:06.396815  535488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:49:06.405274  535488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:49:06.408874  535488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:49:06.408956  535488 kubeadm.go:401] StartCluster: {Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:49:06.409046  535488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:49:06.409117  535488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:49:06.436149  535488 cri.go:89] found id: ""
	I1101 10:49:06.436229  535488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:49:06.444267  535488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:49:06.451771  535488 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:49:06.451888  535488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:49:06.459879  535488 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:49:06.459897  535488 kubeadm.go:158] found existing configuration files:
	
	I1101 10:49:06.459950  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:49:06.467480  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:49:06.467543  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:49:06.474722  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:49:06.482252  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:49:06.482336  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:49:06.489771  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:49:06.497210  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:49:06.497302  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:49:06.504782  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:49:06.512667  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:49:06.512736  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:49:06.520162  535488 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:49:06.585511  535488 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:49:06.585790  535488 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:49:06.652678  535488 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:49:25.866906  535488 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:49:25.866964  535488 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:49:25.867056  535488 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:49:25.867132  535488 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:49:25.867168  535488 kubeadm.go:319] OS: Linux
	I1101 10:49:25.867217  535488 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:49:25.867267  535488 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:49:25.867316  535488 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:49:25.867366  535488 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:49:25.867418  535488 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:49:25.867478  535488 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:49:25.867526  535488 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:49:25.867577  535488 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:49:25.867624  535488 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:49:25.867698  535488 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:49:25.867796  535488 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:49:25.867888  535488 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:49:25.867952  535488 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:49:25.870898  535488 out.go:252]   - Generating certificates and keys ...
	I1101 10:49:25.871021  535488 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:49:25.871095  535488 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:49:25.871182  535488 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:49:25.871246  535488 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:49:25.871314  535488 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:49:25.871370  535488 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:49:25.871446  535488 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:49:25.871604  535488 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-780397 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 10:49:25.871693  535488 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:49:25.871839  535488 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-780397 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 10:49:25.871932  535488 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:49:25.872051  535488 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:49:25.872103  535488 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:49:25.872164  535488 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:49:25.872217  535488 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:49:25.872294  535488 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:49:25.872382  535488 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:49:25.872465  535488 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:49:25.872551  535488 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:49:25.872688  535488 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:49:25.872772  535488 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:49:25.876154  535488 out.go:252]   - Booting up control plane ...
	I1101 10:49:25.876273  535488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:49:25.876362  535488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:49:25.876436  535488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:49:25.876580  535488 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:49:25.876691  535488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:49:25.876814  535488 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:49:25.876941  535488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:49:25.876990  535488 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:49:25.877170  535488 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:49:25.877317  535488 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:49:25.877394  535488 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501375941s
	I1101 10:49:25.877523  535488 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:49:25.877630  535488 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 10:49:25.877754  535488 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:49:25.877867  535488 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:49:25.877979  535488 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.985340677s
	I1101 10:49:25.878056  535488 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.33943431s
	I1101 10:49:25.878160  535488 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501592745s
	I1101 10:49:25.878296  535488 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:49:25.878431  535488 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:49:25.878496  535488 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:49:25.878715  535488 kubeadm.go:319] [mark-control-plane] Marking the node addons-780397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:49:25.878778  535488 kubeadm.go:319] [bootstrap-token] Using token: j1qabl.r7grcx4jd7tbjvaf
	I1101 10:49:25.882602  535488 out.go:252]   - Configuring RBAC rules ...
	I1101 10:49:25.882738  535488 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:49:25.882828  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:49:25.882975  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:49:25.883107  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:49:25.883228  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:49:25.883318  535488 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:49:25.883455  535488 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:49:25.883546  535488 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:49:25.883634  535488 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:49:25.883661  535488 kubeadm.go:319] 
	I1101 10:49:25.883763  535488 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:49:25.883788  535488 kubeadm.go:319] 
	I1101 10:49:25.883900  535488 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:49:25.883924  535488 kubeadm.go:319] 
	I1101 10:49:25.883984  535488 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:49:25.884079  535488 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:49:25.884149  535488 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:49:25.884162  535488 kubeadm.go:319] 
	I1101 10:49:25.884226  535488 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:49:25.884235  535488 kubeadm.go:319] 
	I1101 10:49:25.884286  535488 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:49:25.884294  535488 kubeadm.go:319] 
	I1101 10:49:25.884352  535488 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:49:25.884458  535488 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:49:25.884557  535488 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:49:25.884567  535488 kubeadm.go:319] 
	I1101 10:49:25.884659  535488 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:49:25.884753  535488 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:49:25.884763  535488 kubeadm.go:319] 
	I1101 10:49:25.884857  535488 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j1qabl.r7grcx4jd7tbjvaf \
	I1101 10:49:25.884995  535488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 10:49:25.885021  535488 kubeadm.go:319] 	--control-plane 
	I1101 10:49:25.885029  535488 kubeadm.go:319] 
	I1101 10:49:25.885138  535488 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:49:25.885176  535488 kubeadm.go:319] 
	I1101 10:49:25.885271  535488 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j1qabl.r7grcx4jd7tbjvaf \
	I1101 10:49:25.885397  535488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 10:49:25.885426  535488 cni.go:84] Creating CNI manager for ""
	I1101 10:49:25.885438  535488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:25.890337  535488 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:49:25.893097  535488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:49:25.897215  535488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:49:25.897237  535488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:49:25.910597  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:49:26.199006  535488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:49:26.199161  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:26.199263  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-780397 minikube.k8s.io/updated_at=2025_11_01T10_49_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-780397 minikube.k8s.io/primary=true
	I1101 10:49:26.354666  535488 ops.go:34] apiserver oom_adj: -16
	I1101 10:49:26.354776  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:26.855427  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:27.354949  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:27.855459  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:28.355295  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:28.855766  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:29.355099  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:29.855328  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.355676  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.855346  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.987167  535488 kubeadm.go:1114] duration metric: took 4.788060575s to wait for elevateKubeSystemPrivileges
	I1101 10:49:30.987193  535488 kubeadm.go:403] duration metric: took 24.578274417s to StartCluster
	I1101 10:49:30.987209  535488 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:30.987316  535488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:49:30.987693  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:30.987889  535488 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:49:30.988056  535488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:49:30.988314  535488 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:30.988342  535488 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 10:49:30.988414  535488 addons.go:70] Setting yakd=true in profile "addons-780397"
	I1101 10:49:30.988427  535488 addons.go:239] Setting addon yakd=true in "addons-780397"
	I1101 10:49:30.988448  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.988935  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.989430  535488 addons.go:70] Setting metrics-server=true in profile "addons-780397"
	I1101 10:49:30.989446  535488 addons.go:239] Setting addon metrics-server=true in "addons-780397"
	I1101 10:49:30.989468  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.989912  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.990057  535488 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-780397"
	I1101 10:49:30.990097  535488 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-780397"
	I1101 10:49:30.990126  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.990547  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.992906  535488 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-780397"
	I1101 10:49:30.993185  535488 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-780397"
	I1101 10:49:30.993233  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.993057  535488 addons.go:70] Setting cloud-spanner=true in profile "addons-780397"
	I1101 10:49:30.993066  535488 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-780397"
	I1101 10:49:30.993070  535488 addons.go:70] Setting default-storageclass=true in profile "addons-780397"
	I1101 10:49:30.993074  535488 addons.go:70] Setting gcp-auth=true in profile "addons-780397"
	I1101 10:49:30.993077  535488 addons.go:70] Setting ingress=true in profile "addons-780397"
	I1101 10:49:30.993079  535488 addons.go:70] Setting ingress-dns=true in profile "addons-780397"
	I1101 10:49:30.993082  535488 addons.go:70] Setting inspektor-gadget=true in profile "addons-780397"
	I1101 10:49:30.993113  535488 out.go:179] * Verifying Kubernetes components...
	I1101 10:49:30.993128  535488 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-780397"
	I1101 10:49:30.993141  535488 addons.go:70] Setting registry=true in profile "addons-780397"
	I1101 10:49:30.993144  535488 addons.go:70] Setting registry-creds=true in profile "addons-780397"
	I1101 10:49:30.993147  535488 addons.go:70] Setting storage-provisioner=true in profile "addons-780397"
	I1101 10:49:30.993151  535488 addons.go:70] Setting volumesnapshots=true in profile "addons-780397"
	I1101 10:49:30.993154  535488 addons.go:70] Setting volcano=true in profile "addons-780397"
	I1101 10:49:30.994744  535488 addons.go:239] Setting addon cloud-spanner=true in "addons-780397"
	I1101 10:49:30.995230  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.995266  535488 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-780397"
	I1101 10:49:31.002103  535488 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-780397"
	I1101 10:49:31.003013  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.003252  535488 addons.go:239] Setting addon volumesnapshots=true in "addons-780397"
	I1101 10:49:31.010265  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.010875  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002178  535488 mustload.go:66] Loading cluster: addons-780397
	I1101 10:49:31.022377  535488 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:31.022819  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.027917  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.029903  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.003407  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.044032  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002273  535488 addons.go:239] Setting addon ingress-dns=true in "addons-780397"
	I1101 10:49:31.059762  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.063761  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002287  535488 addons.go:239] Setting addon ingress=true in "addons-780397"
	I1101 10:49:31.083271  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.002303  535488 addons.go:239] Setting addon inspektor-gadget=true in "addons-780397"
	I1101 10:49:31.088317  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.088857  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.090768  535488 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 10:49:31.129441  535488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:31.002428  535488 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-780397"
	I1101 10:49:31.130105  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002541  535488 addons.go:239] Setting addon registry-creds=true in "addons-780397"
	I1101 10:49:31.142239  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.142951  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002551  535488 addons.go:239] Setting addon registry=true in "addons-780397"
	I1101 10:49:31.178775  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.179354  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002568  535488 addons.go:239] Setting addon storage-provisioner=true in "addons-780397"
	I1101 10:49:31.179493  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.179953  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.003423  535488 addons.go:239] Setting addon volcano=true in "addons-780397"
	I1101 10:49:31.197855  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.198419  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.211557  535488 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 10:49:31.212162  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.215517  535488 addons.go:239] Setting addon default-storageclass=true in "addons-780397"
	I1101 10:49:31.215558  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.217625  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.240538  535488 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 10:49:31.240663  535488 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 10:49:31.245809  535488 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 10:49:31.245834  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 10:49:31.245900  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.246121  535488 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 10:49:31.246131  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 10:49:31.246177  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.267628  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 10:49:31.267652  535488 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 10:49:31.267715  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.290783  535488 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-780397"
	I1101 10:49:31.290823  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.291230  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.310339  535488 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 10:49:31.313244  535488 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 10:49:31.313270  535488 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 10:49:31.313343  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.328988  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 10:49:31.329720  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 10:49:31.333751  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 10:49:31.333775  535488 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 10:49:31.333842  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.303381  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 10:49:31.335862  535488 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 10:49:31.335940  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.303627  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.364920  535488 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 10:49:31.366913  535488 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 10:49:31.370087  535488 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 10:49:31.370102  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 10:49:31.370164  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.391567  535488 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 10:49:31.391633  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 10:49:31.391731  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.303982  535488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:49:31.392929  535488 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:49:31.392942  535488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:49:31.393023  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.401469  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 10:49:31.401559  535488 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1101 10:49:31.401965  535488 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 10:49:31.433545  535488 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 10:49:31.433563  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 10:49:31.433630  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.462066  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 10:49:31.472079  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 10:49:31.472973  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 10:49:31.484699  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 10:49:31.484979  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.494074  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.510423  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 10:49:31.511014  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 10:49:31.523533  535488 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 10:49:31.523555  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 10:49:31.523623  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.531714  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 10:49:31.541542  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 10:49:31.545614  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 10:49:31.549432  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.550098  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 10:49:31.550114  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 10:49:31.550173  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.604086  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.605290  535488 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:49:31.605345  535488 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 10:49:31.610961  535488 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:49:31.610984  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:49:31.611051  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.615990  535488 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 10:49:31.618783  535488 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 10:49:31.618802  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 10:49:31.618870  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.636248  535488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:31.647331  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.648441  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.648955  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.664091  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.664707  535488 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 10:49:31.669232  535488 out.go:179]   - Using image docker.io/busybox:stable
	I1101 10:49:31.674171  535488 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 10:49:31.674195  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 10:49:31.674274  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.677675  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.723426  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.730776  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.732839  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.761005  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.765308  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.766449  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:32.148809  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 10:49:32.186115  535488 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:32.186139  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 10:49:32.191724  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 10:49:32.227068  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 10:49:32.239701  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 10:49:32.239743  535488 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 10:49:32.276562  535488 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 10:49:32.276589  535488 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 10:49:32.278057  535488 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 10:49:32.278094  535488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 10:49:32.311151  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:32.320692  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 10:49:32.320716  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 10:49:32.325593  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:49:32.336711  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 10:49:32.336749  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 10:49:32.367519  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 10:49:32.387419  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 10:49:32.398795  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 10:49:32.407902  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 10:49:32.407929  535488 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 10:49:32.413231  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 10:49:32.447411  535488 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 10:49:32.447445  535488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 10:49:32.452271  535488 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 10:49:32.452295  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 10:49:32.489317  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:49:32.495796  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 10:49:32.495831  535488 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 10:49:32.518895  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 10:49:32.518922  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 10:49:32.596350  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 10:49:32.596376  535488 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 10:49:32.614660  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 10:49:32.614723  535488 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 10:49:32.624212  535488 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 10:49:32.624283  535488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 10:49:32.678211  535488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.285874651s)
	I1101 10:49:32.678282  535488 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 10:49:32.679241  535488 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.042970828s)
	I1101 10:49:32.679953  535488 node_ready.go:35] waiting up to 6m0s for node "addons-780397" to be "Ready" ...
	I1101 10:49:32.692638  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 10:49:32.720683  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 10:49:32.720754  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 10:49:32.783442  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 10:49:32.789513  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 10:49:32.789578  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 10:49:32.996380  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 10:49:32.996455  535488 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 10:49:33.039502  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 10:49:33.039580  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 10:49:33.069759  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 10:49:33.183876  535488 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-780397" context rescaled to 1 replicas
	I1101 10:49:33.279900  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 10:49:33.279971  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 10:49:33.288251  535488 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 10:49:33.288326  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 10:49:33.541502  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 10:49:33.541588  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 10:49:33.580971  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 10:49:33.722911  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 10:49:33.722989  535488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 10:49:33.885611  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.736765797s)
	I1101 10:49:33.885759  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.694002347s)
	I1101 10:49:33.929682  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 10:49:33.929780  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 10:49:34.194496  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 10:49:34.194563  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 10:49:34.355343  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 10:49:34.355409  535488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 10:49:34.501269  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1101 10:49:34.755341  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:34.891137  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.664031865s)
	I1101 10:49:36.387592  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.076405603s)
	W1101 10:49:36.387628  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:36.387652  535488 retry.go:31] will retry after 225.84602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
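
The backoff visible in the retry.go lines above (225ms first, then longer waits) is the generic apply-and-retry loop minikube falls back to when an addon manifest fails to apply. As a rough illustrative sketch only (not minikube's actual retry.go; attempt count and delays are made up), the pattern looks like this in Go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f <files...>` with a growing delay
// until it succeeds or the attempts are exhausted.
func applyWithRetry(kubeconfig string, files []string, attempts int) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	delay := 225 * time.Millisecond // first backoff seen in the log above
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
		delay *= 2 // back off a little more on each attempt
	}
	return lastErr
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/ig-crd.yaml",
		"/etc/kubernetes/addons/ig-deployment.yaml",
	}
	if err := applyWithRetry("/var/lib/minikube/kubeconfig", files, 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
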
	I1101 10:49:36.387681  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.06205875s)
	I1101 10:49:36.387907  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.020363722s)
	I1101 10:49:36.388034  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.000594004s)
	I1101 10:49:36.388086  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.989260899s)
	W1101 10:49:36.428085  535488 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
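
The "object has been modified" message above is the standard optimistic-concurrency conflict: the StorageClass changed between read and write, so the update simply has to be retried. A hedged sketch of such a retry, using the standard default-class annotation on the local-path StorageClass (the patch payload, attempt count and pause are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Standard annotation that marks a StorageClass as the cluster default.
	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "patch", "storageclass", "local-path",
			"--type=merge", "-p", patch).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// A conflict means the object changed since it was read; kubectl patch
		// re-reads on every invocation, so a short pause and retry is enough.
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(500 * time.Millisecond)
	}
}
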
	I1101 10:49:36.613992  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 10:49:37.231008  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:37.554101  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064745747s)
	I1101 10:49:37.554200  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.861471734s)
	I1101 10:49:37.554249  535488 addons.go:480] Verifying addon registry=true in "addons-780397"
	I1101 10:49:37.554319  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.770804172s)
	I1101 10:49:37.554340  535488 addons.go:480] Verifying addon metrics-server=true in "addons-780397"
	I1101 10:49:37.554258  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.14099713s)
	I1101 10:49:37.554559  535488 addons.go:480] Verifying addon ingress=true in "addons-780397"
	I1101 10:49:37.554581  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.484734129s)
	I1101 10:49:37.554942  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.973883051s)
	W1101 10:49:37.554970  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 10:49:37.554986  535488 retry.go:31] will retry after 137.300793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
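
This failure is the usual ordering race: the VolumeSnapshotClass is applied in the same batch as the CRDs that define it, so the API server has no mapping for the kind yet, and the stderr itself says to install the CRDs first. A minimal sketch of that ordering, assuming kubectl is on PATH and using the manifest paths from the log (an illustration of the fix, not minikube's implementation, which instead re-applies with --force as seen below):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes the command's combined output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := run("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	// Wait until the CRDs are actually usable before applying resources of those kinds.
	if err := run("wait", "--for=condition=Established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// Now the VolumeSnapshotClass that failed with "no matches for kind" will map.
	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
}
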
	I1101 10:49:37.557497  535488 out.go:179] * Verifying ingress addon...
	I1101 10:49:37.557496  535488 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-780397 service yakd-dashboard -n yakd-dashboard
	
	I1101 10:49:37.557615  535488 out.go:179] * Verifying registry addon...
	I1101 10:49:37.561410  535488 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 10:49:37.563270  535488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 10:49:37.589897  535488 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 10:49:37.589918  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:37.602381  535488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 10:49:37.602400  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:37.693188  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 10:49:38.053529  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.552171451s)
	I1101 10:49:38.053565  535488 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-780397"
	I1101 10:49:38.056776  535488 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 10:49:38.060409  535488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 10:49:38.075263  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:38.075499  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:38.075568  535488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 10:49:38.075580  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
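
The kapi.go lines repeating "current state: Pending" are label-selector polls: list the pods matching a label and keep checking until they reach the desired phase. A small self-contained sketch of that idea, shelling out to kubectl rather than using minikube's kapi helpers (namespace, selector and the 6-minute budget are taken from the log; everything else is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether every pod matching the selector is in phase Running.
func podsRunning(ns, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pods", "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := podsRunning("kube-system", selector); err == nil && ok {
			fmt.Println("all pods Running for", selector)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", selector)
}
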
	I1101 10:49:38.109380  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.495345918s)
	W1101 10:49:38.109420  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:38.109440  535488 retry.go:31] will retry after 426.854895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
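
Every one of these ig-crd.yaml retries fails the same way: kubectl's validation reports a document with no top-level apiVersion or kind, so retrying is unlikely to help until the manifest itself is fixed. A sketch of how one might locate the offending document in a multi-document manifest (assumes the gopkg.in/yaml.v3 package; this checker is not part of minikube):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		if doc == nil {
			fmt.Printf("document %d is empty\n", i)
			continue
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d is missing apiVersion and/or kind (metadata: %v)\n", i, doc["metadata"])
		}
	}
}
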
	I1101 10:49:38.537139  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:38.569404  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:38.572206  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:38.670190  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:38.973861  535488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 10:49:38.974014  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:38.993971  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:39.066745  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:39.067993  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:39.068101  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:39.120082  535488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 10:49:39.137075  535488 addons.go:239] Setting addon gcp-auth=true in "addons-780397"
	I1101 10:49:39.137137  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:39.137640  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:39.164048  535488 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 10:49:39.164135  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:39.188419  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	W1101 10:49:39.446014  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:39.446109  535488 retry.go:31] will retry after 558.688082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:39.449824  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 10:49:39.452781  535488 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 10:49:39.455613  535488 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 10:49:39.455645  535488 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 10:49:39.469768  535488 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 10:49:39.469794  535488 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 10:49:39.483572  535488 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 10:49:39.483597  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 10:49:39.498861  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 10:49:39.572768  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:39.573316  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:39.573674  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 10:49:39.683890  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:39.995661  535488 addons.go:480] Verifying addon gcp-auth=true in "addons-780397"
	I1101 10:49:39.998589  535488 out.go:179] * Verifying gcp-auth addon...
	I1101 10:49:40.002449  535488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 10:49:40.005265  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:40.015138  535488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 10:49:40.015169  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:40.112545  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:40.113481  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:40.113920  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:40.505938  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:40.567104  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:40.567655  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:40.569460  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:40.858771  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:40.858847  535488 retry.go:31] will retry after 654.780752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:41.006301  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:41.063848  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:41.065145  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:41.066334  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:41.505511  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:41.514620  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:41.566420  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:41.568590  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:41.570058  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:42.006717  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:42.068023  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:42.069194  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:42.069716  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:49:42.184409  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	W1101 10:49:42.347143  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:42.347175  535488 retry.go:31] will retry after 1.753587663s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:42.508327  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:42.565646  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:42.566268  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:42.567043  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:43.006597  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:43.066384  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:43.066738  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:43.067204  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:43.505970  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:43.564990  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:43.566035  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:43.566168  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:44.007094  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:44.065497  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:44.065765  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:44.067689  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:44.101781  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:44.507058  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:44.609611  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:44.610330  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:44.610401  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:44.682821  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	W1101 10:49:44.929377  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:44.929463  535488 retry.go:31] will retry after 1.677613923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:45.008159  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:45.071006  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:45.071154  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:45.072907  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:45.506091  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:45.566171  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:45.566313  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:45.566367  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:46.009994  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:46.064860  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:46.066413  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:46.067091  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:46.506770  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:46.565613  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:46.566560  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:46.567227  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:46.607294  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 10:49:46.683829  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:47.006715  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:47.071091  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:47.071488  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:47.072214  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:47.423444  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:47.423488  535488 retry.go:31] will retry after 2.803123831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:47.505286  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:47.563947  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:47.564917  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:47.565855  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:48.008180  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:48.064511  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:48.066255  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:48.067442  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:48.506237  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:48.565446  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:48.565613  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:48.566638  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:49.006033  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:49.063839  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:49.065321  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:49.066240  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:49.183205  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:49.505352  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:49.564561  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:49.565937  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:49.566116  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:50.012508  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:50.065053  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:50.065427  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:50.066400  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:50.227762  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:50.506047  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:50.565822  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:50.565832  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:50.567960  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:51.026080  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:51.072508  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:51.072701  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:51.072897  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:49:51.074358  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:51.074391  535488 retry.go:31] will retry after 5.648000345s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:49:51.183255  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:51.505247  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:51.566807  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:51.567202  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:51.567277  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:52.011377  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:52.065417  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:52.065501  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:52.066467  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:52.506271  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:52.564281  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:52.565192  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:52.565900  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:53.011543  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:53.064630  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:53.065678  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:53.066846  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:53.183588  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:53.505564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:53.564382  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:53.564711  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:53.567092  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:54.008054  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:54.064677  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:54.065851  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:54.066608  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:54.505885  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:54.606589  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:54.606719  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:54.606976  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:55.008237  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:55.064639  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:55.066870  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:55.067313  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:55.183788  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:55.506254  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:55.564380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:55.564524  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:55.566506  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:56.007591  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:56.064140  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:56.064263  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:56.066557  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:56.506990  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:56.563966  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:56.564942  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:56.566248  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:56.722920  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:57.005669  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:57.066127  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:57.066254  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:57.067851  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:57.505575  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 10:49:57.535819  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:57.535869  535488 retry.go:31] will retry after 6.925190166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:57.563550  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:57.565824  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:57.566398  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:57.683094  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:58.007969  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:58.063534  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:58.066224  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:58.066759  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:58.505898  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:58.564550  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:58.564713  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:58.566197  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:59.007784  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:59.063877  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:59.065797  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:59.066404  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:59.505335  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:59.564152  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:59.564481  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:59.566266  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:59.683241  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:00.024548  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:00.103510  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:00.109613  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:00.110838  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:00.507625  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:00.569743  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:00.570953  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:00.571122  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:01.011683  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:01.063989  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:01.065620  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:01.066829  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:01.505623  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:01.565604  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:01.565988  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:01.567299  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:01.683303  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:02.006326  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:02.066038  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:02.066252  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:02.067312  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:02.506380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:02.564497  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:02.564621  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:02.566386  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:03.006925  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:03.066318  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:03.066482  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:03.066548  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:03.505768  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:03.563686  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:03.564689  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:03.566405  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:04.012142  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:04.063948  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:04.065453  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:04.065905  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:04.183859  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:04.462024  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:04.505521  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:04.568151  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:04.568579  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:04.568642  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:05.006724  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:05.066712  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:05.067277  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:05.072847  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:05.292260  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:05.292295  535488 retry.go:31] will retry after 6.813572612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:05.505597  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:05.564098  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:05.565552  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:05.566960  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:06.009783  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:06.065519  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:06.065542  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:06.066965  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:06.506705  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:06.563922  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:06.565897  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:06.566812  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:06.683622  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:07.006702  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:07.063697  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:07.065338  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:07.066259  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:07.505606  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:07.566518  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:07.567008  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:07.567124  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:08.009248  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:08.065039  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:08.065236  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:08.066735  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:08.505996  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:08.564853  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:08.565501  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:08.566693  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:09.006341  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:09.065922  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:09.068623  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:09.069291  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:50:09.183144  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:09.505544  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:09.563691  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:09.565492  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:09.566380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:10.008360  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:10.064046  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:10.065464  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:10.066555  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:10.506212  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:10.564705  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:10.564953  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:10.565826  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:11.007774  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:11.063731  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:11.065330  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:11.066147  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:11.505256  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:11.565938  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:11.566034  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:11.567398  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:11.683247  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:12.008567  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:12.063903  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:12.065389  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:12.066381  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:12.106633  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:12.505865  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:12.564202  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:12.566613  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:12.567197  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:50:12.901670  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:12.901723  535488 retry.go:31] will retry after 10.479824052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:13.006615  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:13.065793  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:13.067214  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:13.067942  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:13.506080  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:13.566449  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:13.568073  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:13.568271  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:13.683471  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:14.017562  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:14.114385  535488 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 10:50:14.114412  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:14.114839  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:14.115171  535488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 10:50:14.115188  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:14.240232  535488 node_ready.go:49] node "addons-780397" is "Ready"
	I1101 10:50:14.240264  535488 node_ready.go:38] duration metric: took 41.560262327s for node "addons-780397" to be "Ready" ...
	I1101 10:50:14.240279  535488 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:50:14.240343  535488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:50:14.267412  535488 api_server.go:72] duration metric: took 43.279495028s to wait for apiserver process to appear ...
	I1101 10:50:14.267493  535488 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:50:14.267528  535488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 10:50:14.288425  535488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 10:50:14.296203  535488 api_server.go:141] control plane version: v1.34.1
	I1101 10:50:14.296284  535488 api_server.go:131] duration metric: took 28.769527ms to wait for apiserver health ...
	I1101 10:50:14.296308  535488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:50:14.397421  535488 system_pods.go:59] 19 kube-system pods found
	I1101 10:50:14.397524  535488 system_pods.go:61] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:14.397549  535488 system_pods.go:61] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending
	I1101 10:50:14.397586  535488 system_pods.go:61] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending
	I1101 10:50:14.397617  535488 system_pods.go:61] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending
	I1101 10:50:14.397640  535488 system_pods.go:61] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:14.397679  535488 system_pods.go:61] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:14.397778  535488 system_pods.go:61] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:14.397804  535488 system_pods.go:61] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:14.397832  535488 system_pods.go:61] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending
	I1101 10:50:14.397875  535488 system_pods.go:61] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:14.397897  535488 system_pods.go:61] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:14.397939  535488 system_pods.go:61] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:14.397967  535488 system_pods.go:61] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending
	I1101 10:50:14.397989  535488 system_pods.go:61] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending
	I1101 10:50:14.398028  535488 system_pods.go:61] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:14.398057  535488 system_pods.go:61] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:14.398082  535488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending
	I1101 10:50:14.398119  535488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.398152  535488 system_pods.go:61] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:14.398204  535488 system_pods.go:74] duration metric: took 101.875517ms to wait for pod list to return data ...
	I1101 10:50:14.398232  535488 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:50:14.431073  535488 default_sa.go:45] found service account: "default"
	I1101 10:50:14.431149  535488 default_sa.go:55] duration metric: took 32.894698ms for default service account to be created ...
	I1101 10:50:14.431173  535488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:50:14.476660  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:14.476743  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:14.476766  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending
	I1101 10:50:14.476789  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending
	I1101 10:50:14.476829  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending
	I1101 10:50:14.476848  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:14.476895  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:14.476919  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:14.476942  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:14.476978  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending
	I1101 10:50:14.477002  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:14.477022  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:14.477061  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:14.477088  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending
	I1101 10:50:14.477109  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending
	I1101 10:50:14.477153  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:14.477182  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:14.477205  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending
	I1101 10:50:14.477245  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.477272  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:14.477321  535488 retry.go:31] will retry after 208.676237ms: missing components: kube-dns
	I1101 10:50:14.511785  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:14.578400  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:14.579890  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:14.581070  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:14.704291  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:14.704374  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:14.704401  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 10:50:14.704446  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 10:50:14.704472  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 10:50:14.704494  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:14.704534  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:14.704560  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:14.704582  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:14.704623  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending
	I1101 10:50:14.704650  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:14.704671  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:14.704714  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:14.704741  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending
	I1101 10:50:14.704766  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 10:50:14.704804  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:14.704833  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:14.704882  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.704921  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.704957  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:14.704995  535488 retry.go:31] will retry after 379.442354ms: missing components: kube-dns
	I1101 10:50:15.032009  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:15.138380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:15.138633  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:15.138755  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:15.139456  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:15.139520  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:15.139547  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 10:50:15.139587  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 10:50:15.139614  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 10:50:15.139637  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:15.139675  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:15.139702  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:15.139723  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:15.139764  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 10:50:15.139789  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:15.139814  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:15.139851  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:15.139886  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 10:50:15.139927  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 10:50:15.139953  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:15.139976  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:15.140013  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.140042  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.140067  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:15.140112  535488 retry.go:31] will retry after 363.623427ms: missing components: kube-dns
	I1101 10:50:15.519911  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:15.529973  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:15.530049  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Running
	I1101 10:50:15.530078  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 10:50:15.530102  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 10:50:15.530147  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 10:50:15.530166  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:15.530187  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:15.530219  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:15.530244  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:15.530269  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 10:50:15.530290  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:15.530323  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:15.530352  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:15.530376  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 10:50:15.530449  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 10:50:15.530478  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:15.530502  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:15.530524  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.530559  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.530583  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Running
	I1101 10:50:15.530608  535488 system_pods.go:126] duration metric: took 1.09941479s to wait for k8s-apps to be running ...
	I1101 10:50:15.530630  535488 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:50:15.530720  535488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:50:15.546635  535488 system_svc.go:56] duration metric: took 15.994152ms WaitForService to wait for kubelet
	I1101 10:50:15.546667  535488 kubeadm.go:587] duration metric: took 44.558756225s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:15.546687  535488 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:50:15.550156  535488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:50:15.550205  535488 node_conditions.go:123] node cpu capacity is 2
	I1101 10:50:15.550217  535488 node_conditions.go:105] duration metric: took 3.524236ms to run NodePressure ...
	I1101 10:50:15.550229  535488 start.go:242] waiting for startup goroutines ...
	I1101 10:50:15.618269  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:15.622677  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:15.623738  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:16.007652  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:16.065649  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:16.066347  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:16.068685  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:16.506495  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:16.608405  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:16.608921  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:16.609357  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:17.007797  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:17.069546  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:17.070084  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:17.070650  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:17.515409  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:17.617979  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:17.618349  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:17.618461  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:18.009164  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:18.071651  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:18.071921  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:18.073420  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:18.508085  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:18.571834  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:18.572213  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:18.572295  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:19.008137  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:19.065004  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:19.065166  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:19.066943  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:19.511507  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:19.611780  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:19.612211  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:19.612564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:20.014841  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:20.067667  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:20.067879  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:20.067964  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:20.506710  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:20.608608  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:20.608844  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:20.609833  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:21.007419  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:21.067563  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:21.067989  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:21.069864  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:21.506221  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:21.567307  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:21.568597  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:21.571366  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:22.006960  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:22.068027  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:22.068543  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:22.071337  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:22.506255  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:22.566957  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:22.567524  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:22.568943  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:23.007476  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:23.064690  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:23.067434  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:23.067846  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:23.382200  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:23.506019  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:23.566896  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:23.567060  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:23.567524  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:24.006403  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:24.066550  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:24.066909  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:24.067120  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:24.466632  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.084393942s)
	W1101 10:50:24.466666  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:24.466683  535488 retry.go:31] will retry after 18.741980911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:24.505763  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:24.568139  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:24.568297  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:24.568790  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:25.007136  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:25.068356  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:25.073243  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:25.073340  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:25.505651  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:25.565610  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:25.567310  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:25.567534  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:26.006734  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:26.069291  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:26.069499  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:26.070053  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:26.505447  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:26.568493  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:26.569413  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:26.570139  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:27.006954  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:27.067543  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:27.067764  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:27.068126  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:27.505756  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:27.563979  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:27.567595  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:27.567844  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:28.006829  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:28.064837  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:28.067641  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:28.067896  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:28.506028  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:28.565547  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:28.565808  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:28.567926  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:29.008253  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:29.110160  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:29.110417  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:29.110553  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:29.506243  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:29.566080  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:29.567135  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:29.568506  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:30.027441  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:30.066500  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:30.066882  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:30.075539  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:30.506002  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:30.566578  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:30.567145  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:30.567991  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:31.007450  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:31.068247  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:31.068420  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:31.071173  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:31.506164  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:31.567483  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:31.568460  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:31.568829  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:32.007089  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:32.066601  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:32.066814  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:32.069043  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:32.507564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:32.565838  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:32.566099  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:32.567152  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:33.007504  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:33.068365  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:33.069024  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:33.072498  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:33.506823  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:33.564843  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:33.565418  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:33.567320  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:34.012710  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:34.068193  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:34.068312  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:34.068907  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:34.506593  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:34.608006  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:34.608650  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:34.608831  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:35.010592  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:35.069556  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:35.069768  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:35.070095  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:35.506765  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:35.566116  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:35.566315  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:35.567431  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:36.006799  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:36.065088  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:36.066075  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:36.067676  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:36.505752  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:36.563919  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:36.565792  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:36.566911  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:37.026769  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:37.067144  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:37.067665  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:37.069249  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:37.506892  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:37.567667  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:37.568091  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:37.571111  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:38.008281  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:38.068294  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:38.068833  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:38.070987  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:38.506639  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:38.567289  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:38.567837  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:38.569745  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:39.007463  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:39.065127  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:39.066333  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:39.068847  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:39.509952  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:39.611961  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:39.612090  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:39.612263  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:40.019778  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:40.067284  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:40.068004  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:40.068214  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:40.505990  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:40.564776  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:40.567429  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:40.567585  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:41.006918  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:41.067377  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:41.070819  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:41.071514  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:41.506323  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:41.564213  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:41.566516  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:41.566547  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:42.007176  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:42.065378  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:42.065636  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:42.067887  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:42.506964  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:42.564048  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:42.567254  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:42.567556  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:43.008335  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:43.107952  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:43.108521  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:43.108744  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:43.208886  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:43.509622  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:43.566811  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:43.567203  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:43.571415  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:44.006136  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:44.065598  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:44.065847  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:44.068702  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:44.224766  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.015833632s)
	W1101 10:50:44.224805  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:44.224824  535488 retry.go:31] will retry after 25.336806971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:44.506577  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:44.567684  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:44.567898  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:44.569025  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:45.030267  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:45.128460  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:45.128947  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:45.129436  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:45.510888  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:45.567982  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:45.568253  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:45.571410  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:46.007698  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:46.066485  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:46.066935  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:46.067628  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:46.505589  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:46.564810  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:46.566893  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:46.569329  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:47.006569  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:47.067801  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:47.068248  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:47.069481  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:47.506397  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:47.565638  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:47.567632  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:47.569501  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:48.007045  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:48.067751  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:48.068450  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:48.070247  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:48.505608  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:48.567574  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:48.567894  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:48.567894  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:49.008426  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:49.066111  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:49.066494  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:49.066575  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:49.510756  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:49.610487  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:49.610714  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:49.611588  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:50.015544  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:50.116341  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:50.116465  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:50.117494  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:50.505539  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:50.566846  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:50.567125  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:50.567437  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:51.008492  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:51.064055  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:51.067579  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:51.067760  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:51.506808  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:51.609116  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:51.609482  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:51.609919  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:52.007370  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:52.064552  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:52.066487  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:52.067387  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:52.505286  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:52.566174  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:52.566320  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:52.567897  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:53.006722  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:53.066117  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:53.067504  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:53.068226  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:53.506330  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:53.568163  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:53.568322  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:53.568954  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:54.007411  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:54.067009  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:54.067267  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:54.071107  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:54.506064  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:54.566130  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:54.566202  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:54.567818  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:55.015526  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:55.115467  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:55.115884  535488 kapi.go:107] duration metric: took 1m17.552614663s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 10:50:55.115943  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:55.506546  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:55.567197  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:55.567607  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:56.007181  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:56.066314  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:56.066738  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:56.506065  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:56.565581  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:56.565869  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:57.006853  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:57.066235  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:57.066671  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:57.506171  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:57.569306  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:57.569452  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:58.010310  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:58.067260  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:58.067814  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:58.506990  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:58.567099  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:58.567764  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:59.006872  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:59.068099  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:59.068750  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:59.506194  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:59.567356  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:59.568041  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:00.077685  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:00.164575  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:00.178615  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:00.506081  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:00.565564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:00.565739  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:01.010792  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:01.066817  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:01.067118  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:01.505470  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:01.566079  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:01.566620  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:02.011547  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:02.066072  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:02.067199  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:02.506113  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:02.570177  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:02.570621  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:03.007191  535488 kapi.go:107] duration metric: took 1m23.00474222s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 10:51:03.010437  535488 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-780397 cluster.
	I1101 10:51:03.013455  535488 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 10:51:03.016416  535488 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
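The gcp-auth messages above describe how a pod opts out of credential mounting: give it a label whose key is `gcp-auth-skip-secret`. As a minimal, hypothetical sketch only (the label value "true", the pod name, and the image are assumptions; the log names only the label key):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                  # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"      # key named in the gcp-auth message; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox                    # hypothetical image for illustration
	    command: ["sleep", "3600"]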
	I1101 10:51:03.066861  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:03.069819  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:03.566451  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:03.566907  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:04.064978  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:04.066885  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:04.565745  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:04.565919  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:05.072785  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:05.072991  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:05.566084  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:05.566287  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:06.066023  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:06.066180  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:06.565002  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:06.565164  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:07.064946  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:07.065798  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:07.572669  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:07.572854  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:08.069649  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:08.069800  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:08.564721  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:08.565391  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:09.066497  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:09.067133  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:09.561928  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:51:09.565072  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:09.565361  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:10.064482  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:10.064860  535488 kapi.go:107] duration metric: took 1m32.503450368s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 10:51:10.572201  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:10.957682  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.395720352s)
	W1101 10:51:10.957771  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:51:10.957850  535488 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
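The repeated inspektor-gadget apply failures above all report the same kubectl validation error: /etc/kubernetes/addons/ig-crd.yaml is rejected because its top-level `apiVersion` and `kind` fields are not set (the message also notes validation can be bypassed with `--validate=false`). The actual contents of ig-crd.yaml are not captured in this log; purely as an illustration of the two fields the validator expects, a well-formed CRD manifest begins along the lines of the hypothetical sketch below (the group, kind, and schema are assumptions, not taken from the addon):

	apiVersion: apiextensions.k8s.io/v1        # top-level field the validator reports as missing
	kind: CustomResourceDefinition             # top-level field the validator reports as missing
	metadata:
	  name: traces.gadget.kinvolk.io           # hypothetical CRD name for illustration
	spec:
	  group: gadget.kinvolk.io
	  scope: Namespaced
	  names:
	    kind: Trace
	    plural: traces
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object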
	I1101 10:51:11.065233  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:11.563907  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:12.064021  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:12.564703  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:13.064488  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:13.564660  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:14.064668  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:14.564896  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:15.065108  535488 kapi.go:107] duration metric: took 1m37.004699818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 10:51:15.068470  535488 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, ingress-dns, registry-creds, default-storageclass, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1101 10:51:15.071496  535488 addons.go:515] duration metric: took 1m44.083129454s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner ingress-dns registry-creds default-storageclass storage-provisioner metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1101 10:51:15.071557  535488 start.go:247] waiting for cluster config update ...
	I1101 10:51:15.071578  535488 start.go:256] writing updated cluster config ...
	I1101 10:51:15.071923  535488 ssh_runner.go:195] Run: rm -f paused
	I1101 10:51:15.076563  535488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:51:15.082364  535488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k9m58" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.096109  535488 pod_ready.go:94] pod "coredns-66bc5c9577-k9m58" is "Ready"
	I1101 10:51:15.096192  535488 pod_ready.go:86] duration metric: took 13.749482ms for pod "coredns-66bc5c9577-k9m58" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.099587  535488 pod_ready.go:83] waiting for pod "etcd-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.105050  535488 pod_ready.go:94] pod "etcd-addons-780397" is "Ready"
	I1101 10:51:15.105133  535488 pod_ready.go:86] duration metric: took 5.462569ms for pod "etcd-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.108370  535488 pod_ready.go:83] waiting for pod "kube-apiserver-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.113811  535488 pod_ready.go:94] pod "kube-apiserver-addons-780397" is "Ready"
	I1101 10:51:15.113889  535488 pod_ready.go:86] duration metric: took 5.494536ms for pod "kube-apiserver-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.117622  535488 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.481402  535488 pod_ready.go:94] pod "kube-controller-manager-addons-780397" is "Ready"
	I1101 10:51:15.481431  535488 pod_ready.go:86] duration metric: took 363.685014ms for pod "kube-controller-manager-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.683036  535488 pod_ready.go:83] waiting for pod "kube-proxy-x5kx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.082835  535488 pod_ready.go:94] pod "kube-proxy-x5kx4" is "Ready"
	I1101 10:51:16.082874  535488 pod_ready.go:86] duration metric: took 399.80523ms for pod "kube-proxy-x5kx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.281472  535488 pod_ready.go:83] waiting for pod "kube-scheduler-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.681143  535488 pod_ready.go:94] pod "kube-scheduler-addons-780397" is "Ready"
	I1101 10:51:16.681171  535488 pod_ready.go:86] duration metric: took 399.673207ms for pod "kube-scheduler-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.681186  535488 pod_ready.go:40] duration metric: took 1.604589473s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:51:16.734645  535488 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:51:16.738749  535488 out.go:179] * Done! kubectl is now configured to use "addons-780397" cluster and "default" namespace by default
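The readiness loop above polls kube-system pods by the listed label selectors, and the final lines record a client/server minor-version skew of 1 (kubectl 1.33 against a 1.34 API server), which is within kubectl's supported +/-1 skew. A sketch of the equivalent manual checks, assuming the minikube-generated kubeconfig context is named after the profile:

	# Readiness by label selector (labels taken from the wait above):
	kubectl --context addons-780397 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context addons-780397 -n kube-system get pods -l component=kube-apiserver
	# Confirm the client/server versions reported above:
	kubectl --context addons-780397 version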
	
	
	==> CRI-O <==
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.706999756Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=fd2846dc-4932-4d9d-99a5-93308990bd81 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.709941517Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=3ac9a71c-f99e-4c35-a525-929f890e4ed6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.712019084Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-dlcvd/registry-creds" id=284a6f7f-e85c-4469-8fbf-8a4e93f8de50 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.712200822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.730237694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.730816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.766398718Z" level=info msg="Created container 4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8: kube-system/registry-creds-764b6fb674-dlcvd/registry-creds" id=284a6f7f-e85c-4469-8fbf-8a4e93f8de50 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.767648492Z" level=info msg="Starting container: 4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8" id=998ebf5f-02b3-4952-84a7-494e1fdf5778 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:54:25 addons-780397 crio[830]: time="2025-11-01T10:54:25.769479081Z" level=info msg="Started container" PID=7246 containerID=4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8 description=kube-system/registry-creds-764b6fb674-dlcvd/registry-creds id=998ebf5f-02b3-4952-84a7-494e1fdf5778 name=/runtime.v1.RuntimeService/StartContainer sandboxID=12787763937e299026185f158fc135acfeab6b3b94e9705f499894f8209b9217
	Nov 01 10:54:25 addons-780397 conmon[7244]: conmon 4905ab1fe9383f5d8280 <ninfo>: container 7246 exited with status 1
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.261931891Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=1ff0dd77-acc3-4c9d-8388-a89469d69680 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.262509581Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=26434d80-6e32-4d37-be0b-0a7ed278852c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.267232273Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4bf582e0-014d-44b9-a043-650a49141821 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.277896332Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-lprng/hello-world-app" id=29ba60e3-5777-4d5d-9466-a459a9a9ddab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.278029019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.285070283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.285291488Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2c8a78b75b8e9beaf0789c4691d391902fe7e0f3f8385709f99de31582433a8d/merged/etc/passwd: no such file or directory"
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.285323537Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2c8a78b75b8e9beaf0789c4691d391902fe7e0f3f8385709f99de31582433a8d/merged/etc/group: no such file or directory"
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.28563258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.309269366Z" level=info msg="Created container d098def721eaa23cf423618c9807f7c218a764985b09e13e86baeafd078a3993: default/hello-world-app-5d498dc89-lprng/hello-world-app" id=29ba60e3-5777-4d5d-9466-a459a9a9ddab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.311936079Z" level=info msg="Starting container: d098def721eaa23cf423618c9807f7c218a764985b09e13e86baeafd078a3993" id=718cc63a-b865-449c-91d4-756b7a8df92c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.333885424Z" level=info msg="Started container" PID=7296 containerID=d098def721eaa23cf423618c9807f7c218a764985b09e13e86baeafd078a3993 description=default/hello-world-app-5d498dc89-lprng/hello-world-app id=718cc63a-b865-449c-91d4-756b7a8df92c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c35a76998c3f56aa74e71cd1f70a8038c334f86937c07c81beff16a856429cc
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.728651219Z" level=info msg="Removing container: b532763c0b57ebda6b056d7d5662c3053f312ddad4a4a7f2c6ccbcea1331a807" id=e465c373-77f7-4106-aef9-fdeb6b4523bc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.761200081Z" level=info msg="Error loading conmon cgroup of container b532763c0b57ebda6b056d7d5662c3053f312ddad4a4a7f2c6ccbcea1331a807: cgroup deleted" id=e465c373-77f7-4106-aef9-fdeb6b4523bc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:54:26 addons-780397 crio[830]: time="2025-11-01T10:54:26.793551517Z" level=info msg="Removed container b532763c0b57ebda6b056d7d5662c3053f312ddad4a4a7f2c6ccbcea1331a807: kube-system/registry-creds-764b6fb674-dlcvd/registry-creds" id=e465c373-77f7-4106-aef9-fdeb6b4523bc name=/runtime.v1.RuntimeService/RemoveContainer
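The CRI-O entries above show an ordinary container lifecycle for the registry-creds and hello-world-app pods: image status checks, CreateContainer, StartContainer, and removal of the previous exited registry-creds attempt. A sketch of pulling the same log interactively, assuming CRI-O runs as a systemd unit inside the minikube node (the usual setup for the docker/kic driver):

	# Tail the CRI-O service log on the node:
	minikube -p addons-780397 ssh -- sudo journalctl -u crio --since "10:54:00" --no-pager
	# Or let minikube collect everything this report is built from:
	minikube -p addons-780397 logs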
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d098def721eaa       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   3c35a76998c3f       hello-world-app-5d498dc89-lprng             default
	4905ab1fe9383       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             1 second ago             Exited              registry-creds                           1                   12787763937e2       registry-creds-764b6fb674-dlcvd             kube-system
	4c9021ce1b143       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   e91a4e595481d       nginx                                       default
	d1d217a3ef36f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   44395057aec6d       busybox                                     default
	95c401b65b6d0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   0bb25b4390104       csi-hostpathplugin-rcv72                    kube-system
	9755d6ed77411       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   0bb25b4390104       csi-hostpathplugin-rcv72                    kube-system
	24eb361f78f37       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   0bb25b4390104       csi-hostpathplugin-rcv72                    kube-system
	aa5242c774ec5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   0bb25b4390104       csi-hostpathplugin-rcv72                    kube-system
	eb322b2b4d349       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   4a650f91b3880       ingress-nginx-controller-675c5ddd98-hs7kh   ingress-nginx
	edf5a75a78d04       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   26008fb2535d3       gcp-auth-78565c9fb4-cbqfl                   gcp-auth
	c5690aa550023       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   0bb25b4390104       csi-hostpathplugin-rcv72                    kube-system
	d0f4a3d46de3d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   eb3fe8d85fcec       gadget-9w9vd                                gadget
	06297cda80172       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   dc76f27639aee       registry-proxy-w5qfc                        kube-system
	109ca94f2ac60       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   83826fab6dc70       csi-hostpath-resizer-0                      kube-system
	8c5122f8790f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   620dcfa26f384       snapshot-controller-7d9fbc56b8-4wv2x        kube-system
	ae7007dc0baff       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   ab76fd43d3784       yakd-dashboard-5ff678cb9-pkd7z              yakd-dashboard
	9226b4f612a88       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   c2b1db8174646       snapshot-controller-7d9fbc56b8-k9qvr        kube-system
	e9f3d5cb96605       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   b33c3031aa6a8       ingress-nginx-admission-patch-gck89         ingress-nginx
	37f3bb87ae1e0       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   408490db506c9       csi-hostpath-attacher-0                     kube-system
	725ca44578089       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   0bb25b4390104       csi-hostpathplugin-rcv72                    kube-system
	2f6544622dcfd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   0b18e5aec929b       ingress-nginx-admission-create-gmhvg        ingress-nginx
	f570fa47b541d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   e48d1c67d745d       local-path-provisioner-648f6765c9-5rtmm     local-path-storage
	de45b5e729e5c       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   23f26a2d1bfd3       nvidia-device-plugin-daemonset-wx5mc        kube-system
	c7a8e262c1c24       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   e6c54ff21b1c3       cloud-spanner-emulator-86bd5cbb97-g4v8z     default
	20dc20a6da2fd       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   fca14ba697e2a       kube-ingress-dns-minikube                   kube-system
	ed4831c43c9c3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   0733afa0c1b95       registry-6b586f9694-px94l                   kube-system
	eae7ef5c0407f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   8d0fda4883ae5       metrics-server-85b7d694d7-lzfmm             kube-system
	c0ebe38f484ad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   f5019695ad30e       storage-provisioner                         kube-system
	63f495cb67067       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   040b9af5ad20c       coredns-66bc5c9577-k9m58                    kube-system
	9219d1677a776       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   c27637e698530       kube-proxy-x5kx4                            kube-system
	d1fceb6cb01a8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   3f3e55a8194a9       kindnet-lvd2k                               kube-system
	45b9a03f6e493       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   5499795fbadc2       kube-scheduler-addons-780397                kube-system
	47b214409da44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   76116c436e52a       kube-controller-manager-addons-780397       kube-system
	1d05f7b649fbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   1cdae0f5b4964       kube-apiserver-addons-780397                kube-system
	ee87b767b30b5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   b10fdb8c31f45       etcd-addons-780397                          kube-system
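The table above is the runtime's view of every container on the node. A sketch of reproducing it directly with crictl (shipped on minikube nodes) and digging into the registry-creds container, whose ID prefix appears in the table:

	# List all containers, including exited ones, straight from CRI-O:
	minikube -p addons-780397 ssh -- sudo crictl ps -a
	# Check why the registry-creds container exited with status 1:
	minikube -p addons-780397 ssh -- sudo crictl logs 4905ab1fe9383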
	
	
	==> coredns [63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07] <==
	[INFO] 10.244.0.18:53473 - 18863 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001812063s
	[INFO] 10.244.0.18:53473 - 41547 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000245533s
	[INFO] 10.244.0.18:53473 - 63189 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00037576s
	[INFO] 10.244.0.18:57465 - 54963 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162103s
	[INFO] 10.244.0.18:57465 - 54750 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00023801s
	[INFO] 10.244.0.18:33156 - 12142 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106742s
	[INFO] 10.244.0.18:33156 - 11945 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091693s
	[INFO] 10.244.0.18:56171 - 60054 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083021s
	[INFO] 10.244.0.18:56171 - 59841 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000207626s
	[INFO] 10.244.0.18:57222 - 26390 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005768346s
	[INFO] 10.244.0.18:57222 - 26851 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005864233s
	[INFO] 10.244.0.18:43306 - 29307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160322s
	[INFO] 10.244.0.18:43306 - 29462 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000149827s
	[INFO] 10.244.0.21:57470 - 34275 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000262133s
	[INFO] 10.244.0.21:58428 - 54324 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195195s
	[INFO] 10.244.0.21:56292 - 55666 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168823s
	[INFO] 10.244.0.21:34167 - 4598 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000254215s
	[INFO] 10.244.0.21:59520 - 27548 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149565s
	[INFO] 10.244.0.21:38761 - 12195 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096124s
	[INFO] 10.244.0.21:58465 - 360 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002372719s
	[INFO] 10.244.0.21:45175 - 10089 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002543996s
	[INFO] 10.244.0.21:37461 - 3910 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001625993s
	[INFO] 10.244.0.21:49447 - 56419 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001789277s
	[INFO] 10.244.0.23:41165 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001066608s
	[INFO] 10.244.0.23:39984 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184077s
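The NXDOMAIN entries above are expected behaviour: the querying pod's resolv.conf search path expands registry.kube-system.svc.cluster.local through the namespace, svc, cluster and EC2 host suffixes before the fully qualified name finally answers NOERROR. A sketch of checking the same resolution from inside the cluster, assuming the busybox test pod listed earlier is still running in the default namespace:

	# Resolve the registry Service from the existing busybox pod:
	kubectl --context addons-780397 exec busybox -- nslookup registry.kube-system.svc.cluster.local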
	
	
	==> describe nodes <==
	Name:               addons-780397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-780397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-780397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_49_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-780397
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-780397"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:49:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-780397
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:54:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:52:39 +0000   Sat, 01 Nov 2025 10:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:52:39 +0000   Sat, 01 Nov 2025 10:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:52:39 +0000   Sat, 01 Nov 2025 10:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:52:39 +0000   Sat, 01 Nov 2025 10:50:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-780397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                36b93fca-ca40-4c07-9468-4e940368c507
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     cloud-spanner-emulator-86bd5cbb97-g4v8z      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  default                     hello-world-app-5d498dc89-lprng              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-9w9vd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-cbqfl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-hs7kh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-k9m58                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-rcv72                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 etcd-addons-780397                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m2s
	  kube-system                 kindnet-lvd2k                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-780397                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-addons-780397        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-x5kx4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-780397                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 metrics-server-85b7d694d7-lzfmm              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-wx5mc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 registry-6b586f9694-px94l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-creds-764b6fb674-dlcvd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-w5qfc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-4wv2x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-k9qvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-5rtmm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pkd7z               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node addons-780397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node addons-780397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s (x8 over 5m9s)  kubelet          Node addons-780397 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m2s                 kubelet          Node addons-780397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s                 kubelet          Node addons-780397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s                 kubelet          Node addons-780397 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m58s                node-controller  Node addons-780397 event: Registered Node addons-780397 in Controller
	  Normal   NodeReady                4m14s                kubelet          Node addons-780397 status is now: NodeReady
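The node report above is the equivalent of kubectl describe node addons-780397, and the percentages under Allocated resources are simply requests over the node's allocatable capacity: 1050m CPU requested on a 2000m node is roughly 52%, and 638Mi requested out of 8022296Ki (about 7834Mi) is roughly 8%. A quick way to re-derive the same numbers:

	# Reproduce the node summary:
	kubectl --context addons-780397 describe node addons-780397
	# Re-derive the CPU and memory request percentages shown above:
	python3 -c "print(round(1050/2000*100), round(638/(8022296/1024)*100))"   # -> 52 8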
	
	
	==> dmesg <==
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:40] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[ +12.370416] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d] <==
	{"level":"warn","ts":"2025-11-01T10:49:21.608037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.618660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.641883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.656394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.686299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.696093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.709893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.722503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.746054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.761180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.778936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.791819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.825931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.846701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.864758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.896361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.921571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.940378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:22.041902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:38.212883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:38.229228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.755258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.770457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.801756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.821881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41794","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [edf5a75a78d04e9d3c1cd09c9d4a5accd533078f752949605a4cba64e6501d81] <==
	2025/11/01 10:51:02 GCP Auth Webhook started!
	2025/11/01 10:51:17 Ready to marshal response ...
	2025/11/01 10:51:17 Ready to write response ...
	2025/11/01 10:51:17 Ready to marshal response ...
	2025/11/01 10:51:17 Ready to write response ...
	2025/11/01 10:51:17 Ready to marshal response ...
	2025/11/01 10:51:17 Ready to write response ...
	2025/11/01 10:51:39 Ready to marshal response ...
	2025/11/01 10:51:39 Ready to write response ...
	2025/11/01 10:51:40 Ready to marshal response ...
	2025/11/01 10:51:40 Ready to write response ...
	2025/11/01 10:51:40 Ready to marshal response ...
	2025/11/01 10:51:40 Ready to write response ...
	2025/11/01 10:51:49 Ready to marshal response ...
	2025/11/01 10:51:49 Ready to write response ...
	2025/11/01 10:51:56 Ready to marshal response ...
	2025/11/01 10:51:56 Ready to write response ...
	2025/11/01 10:52:05 Ready to marshal response ...
	2025/11/01 10:52:05 Ready to write response ...
	2025/11/01 10:52:28 Ready to marshal response ...
	2025/11/01 10:52:28 Ready to write response ...
	2025/11/01 10:54:25 Ready to marshal response ...
	2025/11/01 10:54:25 Ready to write response ...
	
	
	==> kernel <==
	 10:54:27 up  2:36,  0 user,  load average: 0.40, 1.80, 2.79
	Linux addons-780397 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf] <==
	I1101 10:52:23.264949       1 main.go:301] handling current node
	I1101 10:52:33.257809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:52:33.257925       1 main.go:301] handling current node
	I1101 10:52:43.256535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:52:43.256659       1 main.go:301] handling current node
	I1101 10:52:53.264630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:52:53.264730       1 main.go:301] handling current node
	I1101 10:53:03.263873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:53:03.263915       1 main.go:301] handling current node
	I1101 10:53:13.261793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:53:13.261828       1 main.go:301] handling current node
	I1101 10:53:23.264378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:53:23.264414       1 main.go:301] handling current node
	I1101 10:53:33.264926       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:53:33.264964       1 main.go:301] handling current node
	I1101 10:53:43.260376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:53:43.260410       1 main.go:301] handling current node
	I1101 10:53:53.264957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:53:53.264992       1 main.go:301] handling current node
	I1101 10:54:03.256077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:54:03.256190       1 main.go:301] handling current node
	I1101 10:54:13.261894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:54:13.261930       1 main.go:301] handling current node
	I1101 10:54:23.263979       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:54:23.264014       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93] <==
	W1101 10:49:59.801656       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 10:49:59.818104       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:50:13.822786       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.145.123:443: connect: connection refused
	E1101 10:50:13.822834       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.145.123:443: connect: connection refused" logger="UnhandledError"
	W1101 10:50:13.823275       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.145.123:443: connect: connection refused
	E1101 10:50:13.823311       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.145.123:443: connect: connection refused" logger="UnhandledError"
	W1101 10:50:13.948222       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.145.123:443: connect: connection refused
	E1101 10:50:13.948349       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.145.123:443: connect: connection refused" logger="UnhandledError"
	W1101 10:50:19.586414       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 10:50:19.586542       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 10:50:19.587845       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.193.12:443: connect: connection refused" logger="UnhandledError"
	E1101 10:50:19.592942       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.193.12:443: connect: connection refused" logger="UnhandledError"
	E1101 10:50:19.595704       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.193.12:443: connect: connection refused" logger="UnhandledError"
	I1101 10:50:19.726054       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 10:51:27.660359       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45504: use of closed network connection
	E1101 10:51:27.894976       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45520: use of closed network connection
	E1101 10:51:28.027991       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45536: use of closed network connection
	I1101 10:52:04.990802       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 10:52:05.305993       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.147.237"}
	I1101 10:52:07.701945       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1101 10:52:36.353371       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1101 10:54:25.282159       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.84.49"}
	
	
	==> kube-controller-manager [47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0] <==
	I1101 10:49:29.776092       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:49:29.777220       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:49:29.777265       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:49:29.784701       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:49:29.784802       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:49:29.784819       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:49:29.785351       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:49:29.785818       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-780397"
	I1101 10:49:29.785913       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:49:29.785286       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:49:29.786209       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:49:29.787352       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:49:29.788474       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:49:29.788646       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:49:29.789963       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:49:29.792934       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:49:29.800228       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1101 10:49:59.747121       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 10:49:59.747295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 10:49:59.747338       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 10:49:59.790641       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 10:49:59.794979       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 10:49:59.847591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:49:59.895567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:14.832245       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495] <==
	I1101 10:49:33.099962       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:49:33.200607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:49:33.301242       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:49:33.302255       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:49:33.302325       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:49:33.390387       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:49:33.390458       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:49:33.401869       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:49:33.408444       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:49:33.408478       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:49:33.415603       1 config.go:200] "Starting service config controller"
	I1101 10:49:33.415623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:49:33.415644       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:49:33.415649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:49:33.415666       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:49:33.415670       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:49:33.416330       1 config.go:309] "Starting node config controller"
	I1101 10:49:33.416353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:49:33.416359       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:49:33.515757       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:49:33.515800       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:49:33.515833       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6] <==
	I1101 10:49:23.271434       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:49:23.273768       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:49:23.273801       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:49:23.274254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:49:23.274372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1101 10:49:23.282895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:49:23.292078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:49:23.292282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:49:23.292682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:49:23.292701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:49:23.292832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:49:23.293024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:49:23.293027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:49:23.293072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:49:23.293153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:49:23.293229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:49:23.293342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:49:23.293451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:49:23.293489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:49:23.293503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:49:23.293558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:49:23.293596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:49:23.293682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:49:23.293753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1101 10:49:24.974034       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:52:36 addons-780397 kubelet[1289]: I1101 10:52:36.315182    1289 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-069bd1e0-be7f-45b1-95cc-17ca831b5088" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^dbad3bb5-b710-11f0-bc11-e25fe34cf963") on node "addons-780397"
	Nov 01 10:52:36 addons-780397 kubelet[1289]: I1101 10:52:36.319959    1289 scope.go:117] "RemoveContainer" containerID="137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4"
	Nov 01 10:52:36 addons-780397 kubelet[1289]: I1101 10:52:36.330385    1289 scope.go:117] "RemoveContainer" containerID="137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4"
	Nov 01 10:52:36 addons-780397 kubelet[1289]: E1101 10:52:36.331082    1289 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4\": container with ID starting with 137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4 not found: ID does not exist" containerID="137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4"
	Nov 01 10:52:36 addons-780397 kubelet[1289]: I1101 10:52:36.331120    1289 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4"} err="failed to get container status \"137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4\": rpc error: code = NotFound desc = could not find container \"137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4\": container with ID starting with 137e12b254d5f9b5bff1b22548ccbd2547923ebc8ce046117d7c982a19d363d4 not found: ID does not exist"
	Nov 01 10:52:36 addons-780397 kubelet[1289]: I1101 10:52:36.409500    1289 reconciler_common.go:299] "Volume detached for volume \"pvc-069bd1e0-be7f-45b1-95cc-17ca831b5088\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dbad3bb5-b710-11f0-bc11-e25fe34cf963\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:52:37 addons-780397 kubelet[1289]: I1101 10:52:37.217637    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9dcfbe-3968-452a-b619-0720a9336ff2" path="/var/lib/kubelet/pods/4c9dcfbe-3968-452a-b619-0720a9336ff2/volumes"
	Nov 01 10:52:42 addons-780397 kubelet[1289]: I1101 10:52:42.211916    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-px94l" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:53:07 addons-780397 kubelet[1289]: I1101 10:53:07.211476    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wx5mc" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:53:50 addons-780397 kubelet[1289]: I1101 10:53:50.211928    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-w5qfc" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:54:07 addons-780397 kubelet[1289]: I1101 10:54:07.211768    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-px94l" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:54:23 addons-780397 kubelet[1289]: I1101 10:54:23.911405    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dlcvd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:54:25 addons-780397 kubelet[1289]: I1101 10:54:25.219256    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7680ccac-221f-40ea-9b7b-a424ada20b6d-gcp-creds\") pod \"hello-world-app-5d498dc89-lprng\" (UID: \"7680ccac-221f-40ea-9b7b-a424ada20b6d\") " pod="default/hello-world-app-5d498dc89-lprng"
	Nov 01 10:54:25 addons-780397 kubelet[1289]: I1101 10:54:25.219343    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6w7\" (UniqueName: \"kubernetes.io/projected/7680ccac-221f-40ea-9b7b-a424ada20b6d-kube-api-access-vr6w7\") pod \"hello-world-app-5d498dc89-lprng\" (UID: \"7680ccac-221f-40ea-9b7b-a424ada20b6d\") " pod="default/hello-world-app-5d498dc89-lprng"
	Nov 01 10:54:25 addons-780397 kubelet[1289]: E1101 10:54:25.407823    1289 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/69c5c82cefa3a329f45384d416382c2be8c213cadb2b4db0d2cdf86f8fa47bf6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/69c5c82cefa3a329f45384d416382c2be8c213cadb2b4db0d2cdf86f8fa47bf6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 10:54:25 addons-780397 kubelet[1289]: I1101 10:54:25.705344    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dlcvd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:54:25 addons-780397 kubelet[1289]: I1101 10:54:25.705973    1289 scope.go:117] "RemoveContainer" containerID="b532763c0b57ebda6b056d7d5662c3053f312ddad4a4a7f2c6ccbcea1331a807"
	Nov 01 10:54:26 addons-780397 kubelet[1289]: I1101 10:54:26.719752    1289 scope.go:117] "RemoveContainer" containerID="b532763c0b57ebda6b056d7d5662c3053f312ddad4a4a7f2c6ccbcea1331a807"
	Nov 01 10:54:26 addons-780397 kubelet[1289]: I1101 10:54:26.720538    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dlcvd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:54:26 addons-780397 kubelet[1289]: I1101 10:54:26.720678    1289 scope.go:117] "RemoveContainer" containerID="4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8"
	Nov 01 10:54:26 addons-780397 kubelet[1289]: E1101 10:54:26.720924    1289 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-dlcvd_kube-system(c655c9a8-60bd-4c14-8ad8-6be6773d91c7)\"" pod="kube-system/registry-creds-764b6fb674-dlcvd" podUID="c655c9a8-60bd-4c14-8ad8-6be6773d91c7"
	Nov 01 10:54:26 addons-780397 kubelet[1289]: I1101 10:54:26.759977    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-lprng" podStartSLOduration=1.056962142 podStartE2EDuration="1.759957003s" podCreationTimestamp="2025-11-01 10:54:25 +0000 UTC" firstStartedPulling="2025-11-01 10:54:25.563244905 +0000 UTC m=+300.465995734" lastFinishedPulling="2025-11-01 10:54:26.266239766 +0000 UTC m=+301.168990595" observedRunningTime="2025-11-01 10:54:26.758355602 +0000 UTC m=+301.661106439" watchObservedRunningTime="2025-11-01 10:54:26.759957003 +0000 UTC m=+301.662707840"
	Nov 01 10:54:27 addons-780397 kubelet[1289]: I1101 10:54:27.726037    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dlcvd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:54:27 addons-780397 kubelet[1289]: I1101 10:54:27.726094    1289 scope.go:117] "RemoveContainer" containerID="4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8"
	Nov 01 10:54:27 addons-780397 kubelet[1289]: E1101 10:54:27.726233    1289 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-dlcvd_kube-system(c655c9a8-60bd-4c14-8ad8-6be6773d91c7)\"" pod="kube-system/registry-creds-764b6fb674-dlcvd" podUID="c655c9a8-60bd-4c14-8ad8-6be6773d91c7"
	
	
	==> storage-provisioner [c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0] <==
	W1101 10:54:02.506386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:04.509056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:04.513487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:06.516607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:06.523491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:08.526560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:08.533053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:10.535883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:10.541009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:12.543809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:12.548753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:14.551875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:14.559771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:16.563365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:16.567635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:18.570337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:18.574658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:20.577289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:20.583850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:22.586590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:22.590855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:24.593507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:24.598358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:26.607684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:54:26.617377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-780397 -n addons-780397
helpers_test.go:269: (dbg) Run:  kubectl --context addons-780397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-780397 describe pod ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-780397 describe pod ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89: exit status 1 (78.797502ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gmhvg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gck89" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-780397 describe pod ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (263.361696ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:54:28.716308  545243 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:54:28.717023  545243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:54:28.717037  545243 out.go:374] Setting ErrFile to fd 2...
	I1101 10:54:28.717043  545243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:54:28.717326  545243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:54:28.717633  545243 mustload.go:66] Loading cluster: addons-780397
	I1101 10:54:28.718045  545243 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:54:28.718068  545243 addons.go:607] checking whether the cluster is paused
	I1101 10:54:28.718175  545243 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:54:28.718218  545243 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:54:28.718721  545243 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:54:28.739320  545243 ssh_runner.go:195] Run: systemctl --version
	I1101 10:54:28.739387  545243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:54:28.757564  545243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:54:28.864385  545243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:54:28.864465  545243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:54:28.894255  545243 cri.go:89] found id: "4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8"
	I1101 10:54:28.894317  545243 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:54:28.894328  545243 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:54:28.894333  545243 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:54:28.894336  545243 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:54:28.894340  545243 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:54:28.894349  545243 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:54:28.894353  545243 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:54:28.894356  545243 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:54:28.894366  545243 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:54:28.894393  545243 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:54:28.894415  545243 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:54:28.894426  545243 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:54:28.894430  545243 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:54:28.894433  545243 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:54:28.894438  545243 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:54:28.894441  545243 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:54:28.894445  545243 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:54:28.894448  545243 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:54:28.894451  545243 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:54:28.894457  545243 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:54:28.894460  545243 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:54:28.894464  545243 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:54:28.894467  545243 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:54:28.894470  545243 cri.go:89] found id: ""
	I1101 10:54:28.894523  545243 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:54:28.910201  545243 out.go:203] 
	W1101 10:54:28.913203  545243 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:54:28.913238  545243 out.go:285] * 
	* 
	W1101 10:54:28.920521  545243 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:54:28.923667  545243 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable ingress --alsologtostderr -v=1: exit status 11 (257.305906ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:54:28.978975  545285 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:54:28.979749  545285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:54:28.979787  545285 out.go:374] Setting ErrFile to fd 2...
	I1101 10:54:28.979811  545285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:54:28.980219  545285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:54:28.980614  545285 mustload.go:66] Loading cluster: addons-780397
	I1101 10:54:28.981765  545285 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:54:28.981797  545285 addons.go:607] checking whether the cluster is paused
	I1101 10:54:28.981987  545285 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:54:28.982007  545285 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:54:28.982502  545285 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:54:29.000635  545285 ssh_runner.go:195] Run: systemctl --version
	I1101 10:54:29.000689  545285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:54:29.018683  545285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:54:29.124454  545285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:54:29.124568  545285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:54:29.154886  545285 cri.go:89] found id: "4905ab1fe9383f5d8280ec12237625344cd2e072d520ee505e3bf9d66b39c2d8"
	I1101 10:54:29.154960  545285 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:54:29.154979  545285 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:54:29.155011  545285 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:54:29.155039  545285 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:54:29.155063  545285 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:54:29.155082  545285 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:54:29.155101  545285 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:54:29.155119  545285 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:54:29.155151  545285 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:54:29.155177  545285 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:54:29.155198  545285 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:54:29.155218  545285 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:54:29.155237  545285 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:54:29.155264  545285 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:54:29.155288  545285 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:54:29.155316  545285 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:54:29.155337  545285 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:54:29.155367  545285 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:54:29.155392  545285 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:54:29.155417  545285 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:54:29.155436  545285 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:54:29.155456  545285 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:54:29.155488  545285 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:54:29.155505  545285 cri.go:89] found id: ""
	I1101 10:54:29.155596  545285 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:54:29.170595  545285 out.go:203] 
	W1101 10:54:29.173636  545285 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:54:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:54:29.173659  545285 out.go:285] * 
	* 
	W1101 10:54:29.180756  545285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:54:29.183649  545285 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.51s)
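
Note on the recurring exit status 11: every "addons disable" command in this report fails the same way. As the stderr captures show, minikube first checks whether the cluster is paused (addons.go: "checking whether the cluster is paused") by listing kube-system containers with crictl and then running "sudo runc list -f json"; on this CRI-O node the second step fails with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED. The Go sketch below is illustrative only, not minikube's implementation; it simply re-runs the two commands visible in the stderr captures so the failing step can be reproduced in isolation. Only the command names and flags are taken from the log above; everything else is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs mirrors the first command shown in the stderr capture:
	//   sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// runcList mirrors the second command, the one that exits with status 1 here
	// ("open /run/runc: no such file or directory").
	func runcList() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("kube-system containers found: %d\n", len(ids))

		if out, err := runcList(); err != nil {
			// On this node this is the failure that turns every
			// "addons disable" into MK_ADDON_DISABLE_PAUSED / exit status 11.
			fmt.Printf("runc list failed: %v\n%s\n", err, out)
		}
	}

Run on the node (for example via "minikube ssh"), this should print the same crictl container IDs listed in the captures and then the same runc error seen above.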

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9w9vd" [bb23bb34-66fd-404d-99c5-51fd29c930bf] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00354966s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (255.331868ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:52:04.477200  543198 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:04.478115  543198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:04.478157  543198 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:04.478179  543198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:04.478472  543198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:52:04.478805  543198 mustload.go:66] Loading cluster: addons-780397
	I1101 10:52:04.479212  543198 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:04.479254  543198 addons.go:607] checking whether the cluster is paused
	I1101 10:52:04.479399  543198 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:04.479436  543198 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:52:04.479974  543198 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:52:04.497660  543198 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:04.497760  543198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:52:04.516685  543198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:52:04.620245  543198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:04.620334  543198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:04.649268  543198 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:52:04.649292  543198 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:52:04.649298  543198 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:52:04.649311  543198 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:52:04.649316  543198 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:52:04.649320  543198 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:52:04.649324  543198 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:52:04.649327  543198 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:52:04.649331  543198 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:52:04.649339  543198 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:52:04.649344  543198 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:52:04.649348  543198 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:52:04.649358  543198 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:52:04.649361  543198 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:52:04.649364  543198 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:52:04.649369  543198 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:52:04.649384  543198 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:52:04.649389  543198 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:52:04.649392  543198 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:52:04.649395  543198 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:52:04.649401  543198 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:52:04.649409  543198 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:52:04.649412  543198 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:52:04.649415  543198 cri.go:89] found id: ""
	I1101 10:52:04.649473  543198 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:52:04.664185  543198 out.go:203] 
	W1101 10:52:04.667019  543198 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:52:04.667046  543198 out.go:285] * 
	* 
	W1101 10:52:04.674049  543198 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:52:04.676955  543198 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.700633ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004098291s
addons_test.go:463: (dbg) Run:  kubectl --context addons-780397 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (324.808762ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:58.166585  543031 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:58.167386  543031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:58.167395  543031 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:58.167401  543031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:58.167671  543031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:58.167943  543031 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:58.168303  543031 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:58.168317  543031 addons.go:607] checking whether the cluster is paused
	I1101 10:51:58.168416  543031 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:58.168426  543031 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:58.168909  543031 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:58.187223  543031 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:58.187288  543031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:58.209645  543031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:58.327937  543031 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:58.328049  543031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:58.383218  543031 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:58.383238  543031 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:58.383243  543031 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:58.383247  543031 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:58.383251  543031 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:58.383255  543031 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:58.383259  543031 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:58.383268  543031 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:58.383271  543031 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:58.383280  543031 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:58.383295  543031 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:58.383298  543031 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:58.383302  543031 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:58.383306  543031 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:58.383317  543031 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:58.383326  543031 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:58.383330  543031 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:58.383335  543031 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:58.383338  543031 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:58.383341  543031 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:58.383347  543031 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:58.383355  543031 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:58.383367  543031 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:58.383371  543031 cri.go:89] found id: ""
	I1101 10:51:58.383422  543031 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:58.400964  543031 out.go:203] 
	W1101 10:51:58.403991  543031 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:58.404028  543031 out.go:285] * 
	* 
	W1101 10:51:58.411369  543031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:58.414518  543031 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.45s)
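The disable fails before it ever touches metrics-server: the pre-flight "check paused" step shells out to "sudo runc list -f json", which exits non-zero because /run/runc does not exist on this crio node, and that non-zero exit is surfaced as MK_ADDON_DISABLE_PAUSED. The same probe aborts every other addon enable/disable in this report. A minimal sketch of that probe, assuming it runs directly on the node (the real flow goes through minikube's ssh_runner) and assuming "runc list -f json" emits the usual id/status fields; runcListPaused is an illustrative name, not minikube's own helper:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState carries the two fields of interest from "runc list -f json".
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// runcListPaused reproduces the probe seen in the log: list runc containers
// and report the paused ones. On this node the command itself fails with
// "open /run/runc: no such file or directory", so the error path is taken.
func runcListPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list -f json: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := runcListPaused()
	if err != nil {
		// This is the branch the failing tests hit: the command error is
		// reported as a paused-check failure even though nothing is paused.
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}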

                                                
                                    
TestAddons/parallel/CSI (47.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 10:51:50.031690  534720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 10:51:50.052745  534720 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 10:51:50.052774  534720 kapi.go:107] duration metric: took 21.096695ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 21.107945ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-780397 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-780397 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [16b51fe8-0e29-40a0-8b9d-ec853b68b00d] Pending
helpers_test.go:352: "task-pv-pod" [16b51fe8-0e29-40a0-8b9d-ec853b68b00d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [16b51fe8-0e29-40a0-8b9d-ec853b68b00d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003420291s
addons_test.go:572: (dbg) Run:  kubectl --context addons-780397 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-780397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-780397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-780397 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-780397 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-780397 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-780397 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4c9dcfbe-3968-452a-b619-0720a9336ff2] Pending
helpers_test.go:352: "task-pv-pod-restore" [4c9dcfbe-3968-452a-b619-0720a9336ff2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4c9dcfbe-3968-452a-b619-0720a9336ff2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00422458s
addons_test.go:614: (dbg) Run:  kubectl --context addons-780397 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-780397 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-780397 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (264.130269ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:52:36.790271  544012 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:36.791020  544012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:36.791060  544012 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:36.791083  544012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:36.791391  544012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:52:36.791792  544012 mustload.go:66] Loading cluster: addons-780397
	I1101 10:52:36.792235  544012 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:36.792274  544012 addons.go:607] checking whether the cluster is paused
	I1101 10:52:36.792423  544012 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:36.792455  544012 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:52:36.792982  544012 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:52:36.810722  544012 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:36.810782  544012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:52:36.831045  544012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:52:36.937204  544012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:36.937297  544012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:36.967066  544012 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:52:36.967097  544012 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:52:36.967103  544012 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:52:36.967108  544012 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:52:36.967111  544012 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:52:36.967115  544012 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:52:36.967119  544012 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:52:36.967124  544012 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:52:36.967128  544012 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:52:36.967136  544012 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:52:36.967143  544012 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:52:36.967146  544012 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:52:36.967149  544012 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:52:36.967153  544012 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:52:36.967156  544012 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:52:36.967169  544012 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:52:36.967176  544012 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:52:36.967181  544012 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:52:36.967183  544012 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:52:36.967186  544012 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:52:36.967196  544012 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:52:36.967199  544012 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:52:36.967202  544012 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:52:36.967205  544012 cri.go:89] found id: ""
	I1101 10:52:36.967267  544012 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:52:36.981878  544012 out.go:203] 
	W1101 10:52:36.984721  544012 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:52:36.984744  544012 out.go:285] * 
	* 
	W1101 10:52:36.992026  544012 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:52:36.995022  544012 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (276.932034ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:52:37.061100  544057 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:52:37.061867  544057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:37.061882  544057 out.go:374] Setting ErrFile to fd 2...
	I1101 10:52:37.061887  544057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:52:37.062232  544057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:52:37.062578  544057 mustload.go:66] Loading cluster: addons-780397
	I1101 10:52:37.063040  544057 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:37.063062  544057 addons.go:607] checking whether the cluster is paused
	I1101 10:52:37.063210  544057 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:52:37.063227  544057 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:52:37.063747  544057 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:52:37.080983  544057 ssh_runner.go:195] Run: systemctl --version
	I1101 10:52:37.081076  544057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:52:37.104742  544057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:52:37.208471  544057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:52:37.208594  544057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:52:37.244809  544057 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:52:37.244831  544057 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:52:37.244836  544057 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:52:37.244849  544057 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:52:37.244853  544057 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:52:37.244857  544057 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:52:37.244859  544057 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:52:37.244862  544057 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:52:37.244866  544057 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:52:37.244872  544057 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:52:37.244875  544057 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:52:37.244878  544057 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:52:37.244882  544057 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:52:37.244886  544057 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:52:37.244895  544057 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:52:37.244900  544057 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:52:37.244905  544057 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:52:37.244909  544057 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:52:37.244913  544057 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:52:37.244916  544057 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:52:37.244920  544057 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:52:37.244923  544057 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:52:37.244927  544057 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:52:37.244931  544057 cri.go:89] found id: ""
	I1101 10:52:37.244981  544057 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:52:37.260541  544057 out.go:203] 
	W1101 10:52:37.263425  544057 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:52:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:52:37.263448  544057 out.go:285] * 
	* 
	W1101 10:52:37.270633  544057 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:52:37.273758  544057 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.25s)
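The long runs of "kubectl get pvc hpvc -o jsonpath={.status.phase}" (and later hpvc-restore) earlier in this test are the suite polling each claim until it reports Bound before moving to the next step. A minimal sketch of that wait loop, assuming kubectl is on PATH; waitForPVCBound and the two-second poll interval are illustrative choices, not the suite's own helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase runs the same kubectl query the helpers above use and returns
// the claim's .status.phase (e.g. "Pending" or "Bound").
func pvcPhase(kubeContext, name, namespace string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// waitForPVCBound polls until the claim is Bound or the timeout expires.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if phase, err := pvcPhase(kubeContext, name, namespace); err == nil && phase == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	// Mirrors the 6m0s wait for "hpvc" in namespace "default" above.
	if err := waitForPVCBound("addons-780397", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}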

                                                
                                    
TestAddons/parallel/Headlamp (3.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-780397 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-780397 --alsologtostderr -v=1: exit status 11 (308.741323ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:49.427588  542315 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:49.429413  542315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:49.429476  542315 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:49.429498  542315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:49.429917  542315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:49.430360  542315 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:49.430774  542315 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:49.430819  542315 addons.go:607] checking whether the cluster is paused
	I1101 10:51:49.430945  542315 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:49.430981  542315 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:49.431534  542315 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:49.454462  542315 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:49.454528  542315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:49.473811  542315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:49.588306  542315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:49.588394  542315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:49.621604  542315 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:49.621630  542315 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:49.621635  542315 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:49.621639  542315 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:49.621642  542315 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:49.621646  542315 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:49.621649  542315 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:49.621652  542315 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:49.621655  542315 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:49.621662  542315 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:49.621666  542315 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:49.621669  542315 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:49.621672  542315 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:49.621676  542315 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:49.621679  542315 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:49.621684  542315 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:49.621688  542315 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:49.621711  542315 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:49.621714  542315 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:49.621717  542315 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:49.621723  542315 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:49.621726  542315 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:49.621729  542315 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:49.621732  542315 cri.go:89] found id: ""
	I1101 10:51:49.621784  542315 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:49.638475  542315 out.go:203] 
	W1101 10:51:49.641613  542315 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:49.641644  542315 out.go:285] * 
	* 
	W1101 10:51:49.648922  542315 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:49.652132  542315 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-780397 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-780397
helpers_test.go:243: (dbg) docker inspect addons-780397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6",
	        "Created": "2025-11-01T10:48:56.100696119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 535884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:48:56.159996171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/hosts",
	        "LogPath": "/var/lib/docker/containers/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6-json.log",
	        "Name": "/addons-780397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-780397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-780397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6",
	                "LowerDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe4ea45cdd89f2c9d1f2cb2b8be871ff8ab2c01c23869905f60e0060bf98a7f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-780397",
	                "Source": "/var/lib/docker/volumes/addons-780397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-780397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-780397",
	                "name.minikube.sigs.k8s.io": "addons-780397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "822dd58c1fe6787728cc98f29ab3db06ea50e99d9ff68359a4651e97910ec3c0",
	            "SandboxKey": "/var/run/docker/netns/822dd58c1fe6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-780397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:85:b9:0e:b3:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5dad9c17d41b068f1874aba9bc4d83a7bdafd82a350976f89ac87070117f67d2",
	                    "EndpointID": "55663942b33fe339acf47e76c9e79524da5bc8d3e830819463b546ccaf0c44dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-780397",
	                        "7d2662ca9bdd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
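The Ports map in the inspect output above (22/tcp published on 127.0.0.1:33495) is what the addon commands read earlier with docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' before opening their SSH client. A small sketch of that lookup, assuming the docker CLI is on PATH; sshHostPort is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is bound to the node's 22/tcp,
// using the same inspect format string that appears in the addon logs.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-780397")
	if err != nil {
		fmt.Println(err)
		return
	}
	// For the container captured above this would print 127.0.0.1:33495.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}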
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-780397 -n addons-780397
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-780397 logs -n 25: (1.638916159s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-186382 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-186382   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p download-only-186382                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-186382   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -o=json --download-only -p download-only-491444 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-491444   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p download-only-491444                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-491444   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p download-only-186382                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-186382   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p download-only-491444                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-491444   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ --download-only -p download-docker-524809 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-524809 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ delete  │ -p download-docker-524809                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-524809 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ --download-only -p binary-mirror-212672 --alsologtostderr --binary-mirror http://127.0.0.1:46695 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-212672   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ delete  │ -p binary-mirror-212672                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-212672   │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ addons  │ enable dashboard -p addons-780397                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ addons  │ disable dashboard -p addons-780397                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ start   │ -p addons-780397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:51 UTC │
	│ addons  │ addons-780397 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ ip      │ addons-780397 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ addons  │ addons-780397 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ ssh     │ addons-780397 ssh cat /opt/local-path-provisioner/pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │ 01 Nov 25 10:51 UTC │
	│ addons  │ addons-780397 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ enable headlamp -p addons-780397 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	│ addons  │ addons-780397 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-780397          │ jenkins │ v1.37.0 │ 01 Nov 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
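For reference, the cluster exercised in this run comes from the "start" entry in the table above. A minimal way to reproduce that invocation by hand (a sketch, assuming the same out/minikube-linux-arm64 binary and a working Docker daemon; all flags are copied from the table entry) would be:

	out/minikube-linux-arm64 start -p addons-780397 \
	  --wait=true --memory=4096 --alsologtostderr \
	  --driver=docker --container-runtime=crio \
	  --addons=registry --addons=registry-creds --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	  --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
	  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher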
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:48:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:48:30.104953  535488 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:48:30.105179  535488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:30.105213  535488 out.go:374] Setting ErrFile to fd 2...
	I1101 10:48:30.105235  535488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:30.105560  535488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:48:30.106168  535488 out.go:368] Setting JSON to false
	I1101 10:48:30.107139  535488 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9059,"bootTime":1761985051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:48:30.107259  535488 start.go:143] virtualization:  
	I1101 10:48:30.112769  535488 out.go:179] * [addons-780397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:48:30.116013  535488 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:48:30.116054  535488 notify.go:221] Checking for updates...
	I1101 10:48:30.119120  535488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:48:30.122178  535488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:48:30.125111  535488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 10:48:30.128024  535488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:48:30.131091  535488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:48:30.134401  535488 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:48:30.159243  535488 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:48:30.159374  535488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:30.224418  535488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:48:30.210139372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:30.224520  535488 docker.go:319] overlay module found
	I1101 10:48:30.227652  535488 out.go:179] * Using the docker driver based on user configuration
	I1101 10:48:30.230407  535488 start.go:309] selected driver: docker
	I1101 10:48:30.230428  535488 start.go:930] validating driver "docker" against <nil>
	I1101 10:48:30.230443  535488 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:48:30.231170  535488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:30.291205  535488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:48:30.281163164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:30.291363  535488 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:48:30.291604  535488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:48:30.294465  535488 out.go:179] * Using Docker driver with root privileges
	I1101 10:48:30.297391  535488 cni.go:84] Creating CNI manager for ""
	I1101 10:48:30.297468  535488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:48:30.297482  535488 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:48:30.297573  535488 start.go:353] cluster config:
	{Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 10:48:30.302497  535488 out.go:179] * Starting "addons-780397" primary control-plane node in "addons-780397" cluster
	I1101 10:48:30.305384  535488 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:48:30.308278  535488 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:48:30.311065  535488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:48:30.311133  535488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:48:30.311148  535488 cache.go:59] Caching tarball of preloaded images
	I1101 10:48:30.311151  535488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:48:30.311232  535488 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:48:30.311242  535488 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:48:30.311581  535488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/config.json ...
	I1101 10:48:30.311602  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/config.json: {Name:mkafaa477b09cf7e80b93a7e65a9a24fb797d1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:48:30.327051  535488 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:48:30.327188  535488 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 10:48:30.327211  535488 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 10:48:30.327216  535488 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 10:48:30.327224  535488 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 10:48:30.327229  535488 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 10:48:48.149412  535488 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 10:48:48.149467  535488 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:48:48.149497  535488 start.go:360] acquireMachinesLock for addons-780397: {Name:mk3b3a54a349679dc1852b86688785584ad3651f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:48:48.149631  535488 start.go:364] duration metric: took 108.474µs to acquireMachinesLock for "addons-780397"
	I1101 10:48:48.149663  535488 start.go:93] Provisioning new machine with config: &{Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:48:48.149776  535488 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:48:48.153025  535488 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 10:48:48.153269  535488 start.go:159] libmachine.API.Create for "addons-780397" (driver="docker")
	I1101 10:48:48.153310  535488 client.go:173] LocalClient.Create starting
	I1101 10:48:48.153435  535488 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 10:48:48.953004  535488 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 10:48:49.242063  535488 cli_runner.go:164] Run: docker network inspect addons-780397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:48:49.257080  535488 cli_runner.go:211] docker network inspect addons-780397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:48:49.257172  535488 network_create.go:284] running [docker network inspect addons-780397] to gather additional debugging logs...
	I1101 10:48:49.257194  535488 cli_runner.go:164] Run: docker network inspect addons-780397
	W1101 10:48:49.272277  535488 cli_runner.go:211] docker network inspect addons-780397 returned with exit code 1
	I1101 10:48:49.272309  535488 network_create.go:287] error running [docker network inspect addons-780397]: docker network inspect addons-780397: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-780397 not found
	I1101 10:48:49.272337  535488 network_create.go:289] output of [docker network inspect addons-780397]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-780397 not found
	
	** /stderr **
	I1101 10:48:49.272453  535488 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:48:49.288518  535488 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0fcc0}
	I1101 10:48:49.288554  535488 network_create.go:124] attempt to create docker network addons-780397 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 10:48:49.288609  535488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-780397 addons-780397
	I1101 10:48:49.348075  535488 network_create.go:108] docker network addons-780397 192.168.49.0/24 created
	I1101 10:48:49.348106  535488 kic.go:121] calculated static IP "192.168.49.2" for the "addons-780397" container
	I1101 10:48:49.348189  535488 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:48:49.363299  535488 cli_runner.go:164] Run: docker volume create addons-780397 --label name.minikube.sigs.k8s.io=addons-780397 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:48:49.381328  535488 oci.go:103] Successfully created a docker volume addons-780397
	I1101 10:48:49.381420  535488 cli_runner.go:164] Run: docker run --rm --name addons-780397-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-780397 --entrypoint /usr/bin/test -v addons-780397:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:48:51.598637  535488 cli_runner.go:217] Completed: docker run --rm --name addons-780397-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-780397 --entrypoint /usr/bin/test -v addons-780397:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.217167035s)
	I1101 10:48:51.598679  535488 oci.go:107] Successfully prepared a docker volume addons-780397
	I1101 10:48:51.598709  535488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:48:51.598730  535488 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:48:51.598805  535488 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-780397:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:48:56.030427  535488 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-780397:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431581104s)
	I1101 10:48:56.030458  535488 kic.go:203] duration metric: took 4.431724376s to extract preloaded images to volume ...
	W1101 10:48:56.030605  535488 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 10:48:56.030761  535488 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:48:56.086038  535488 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-780397 --name addons-780397 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-780397 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-780397 --network addons-780397 --ip 192.168.49.2 --volume addons-780397:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:48:56.381366  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Running}}
	I1101 10:48:56.400346  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:48:56.429618  535488 cli_runner.go:164] Run: docker exec addons-780397 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:48:56.483642  535488 oci.go:144] the created container "addons-780397" has a running status.
	I1101 10:48:56.483671  535488 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa...
	I1101 10:48:56.754426  535488 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:48:56.778268  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:48:56.797073  535488 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:48:56.797092  535488 kic_runner.go:114] Args: [docker exec --privileged addons-780397 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:48:56.857049  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:48:56.883734  535488 machine.go:94] provisionDockerMachine start ...
	I1101 10:48:56.883853  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:48:56.908190  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:48:56.908817  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:48:56.908842  535488 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:48:56.909850  535488 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:49:00.123306  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-780397
	
	I1101 10:49:00.123334  535488 ubuntu.go:182] provisioning hostname "addons-780397"
	I1101 10:49:00.123415  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:00.191347  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:00.191669  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:49:00.191681  535488 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-780397 && echo "addons-780397" | sudo tee /etc/hostname
	I1101 10:49:00.446030  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-780397
	
	I1101 10:49:00.446146  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:00.467417  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:00.467748  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:49:00.467773  535488 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-780397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-780397/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-780397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:49:00.617951  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:49:00.617978  535488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 10:49:00.618009  535488 ubuntu.go:190] setting up certificates
	I1101 10:49:00.618019  535488 provision.go:84] configureAuth start
	I1101 10:49:00.618081  535488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-780397
	I1101 10:49:00.635149  535488 provision.go:143] copyHostCerts
	I1101 10:49:00.635235  535488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 10:49:00.635390  535488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 10:49:00.635459  535488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 10:49:00.635519  535488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.addons-780397 san=[127.0.0.1 192.168.49.2 addons-780397 localhost minikube]
	I1101 10:49:02.244484  535488 provision.go:177] copyRemoteCerts
	I1101 10:49:02.244555  535488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:49:02.244597  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.267203  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:02.373321  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:49:02.390286  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 10:49:02.407276  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:49:02.424531  535488 provision.go:87] duration metric: took 1.806487246s to configureAuth
	I1101 10:49:02.424560  535488 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:49:02.424751  535488 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:02.424864  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.440977  535488 main.go:143] libmachine: Using SSH client type: native
	I1101 10:49:02.441278  535488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1101 10:49:02.441293  535488 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:49:02.694668  535488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:49:02.694695  535488 machine.go:97] duration metric: took 5.810938498s to provisionDockerMachine
	I1101 10:49:02.694706  535488 client.go:176] duration metric: took 14.541378697s to LocalClient.Create
	I1101 10:49:02.694719  535488 start.go:167] duration metric: took 14.541451379s to libmachine.API.Create "addons-780397"
	I1101 10:49:02.694735  535488 start.go:293] postStartSetup for "addons-780397" (driver="docker")
	I1101 10:49:02.694750  535488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:49:02.694828  535488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:49:02.694886  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.713390  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:02.817608  535488 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:49:02.820911  535488 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:49:02.820941  535488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:49:02.820953  535488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 10:49:02.821025  535488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 10:49:02.821068  535488 start.go:296] duration metric: took 126.321528ms for postStartSetup
	I1101 10:49:02.821397  535488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-780397
	I1101 10:49:02.837310  535488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/config.json ...
	I1101 10:49:02.837603  535488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:49:02.837656  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.854875  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:02.954726  535488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:49:02.959195  535488 start.go:128] duration metric: took 14.809402115s to createHost
	I1101 10:49:02.959220  535488 start.go:83] releasing machines lock for "addons-780397", held for 14.809575073s
	I1101 10:49:02.959318  535488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-780397
	I1101 10:49:02.976039  535488 ssh_runner.go:195] Run: cat /version.json
	I1101 10:49:02.976107  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.976352  535488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:49:02.976420  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:02.997455  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:03.005941  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:03.196943  535488 ssh_runner.go:195] Run: systemctl --version
	I1101 10:49:03.203476  535488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:49:03.240522  535488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:49:03.245219  535488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:49:03.245296  535488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:49:03.274750  535488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 10:49:03.274776  535488 start.go:496] detecting cgroup driver to use...
	I1101 10:49:03.274837  535488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:49:03.274914  535488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:49:03.291254  535488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:49:03.304035  535488 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:49:03.304100  535488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:49:03.321926  535488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:49:03.340099  535488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:49:03.448361  535488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:49:03.586379  535488 docker.go:234] disabling docker service ...
	I1101 10:49:03.586444  535488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:49:03.608320  535488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:49:03.621198  535488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:49:03.738506  535488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:49:03.857277  535488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:49:03.870760  535488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:49:03.884936  535488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:49:03.885002  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.894371  535488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:49:03.894453  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.903526  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.912514  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.921531  535488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:49:03.929730  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.938684  535488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.951846  535488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:49:03.960418  535488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:49:03.967915  535488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:49:03.975579  535488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:04.086057  535488 ssh_runner.go:195] Run: sudo systemctl restart crio
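Condensed, the CRI-O preparation logged above amounts to the following shell sequence run inside the node container (a sketch based only on the commands in the log; the drop-in path /etc/crio/crio.conf.d/02-crio.conf and the pause image tag are taken from the lines above):

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup driver that kubelet is configured to expect
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# enable IPv4 forwarding, then reload units and restart the runtime
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio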
	I1101 10:49:04.216573  535488 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:49:04.216723  535488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:49:04.220524  535488 start.go:564] Will wait 60s for crictl version
	I1101 10:49:04.220597  535488 ssh_runner.go:195] Run: which crictl
	I1101 10:49:04.224209  535488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:49:04.249441  535488 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:49:04.249565  535488 ssh_runner.go:195] Run: crio --version
	I1101 10:49:04.277861  535488 ssh_runner.go:195] Run: crio --version
	I1101 10:49:04.310631  535488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:49:04.313541  535488 cli_runner.go:164] Run: docker network inspect addons-780397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:49:04.329791  535488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 10:49:04.333639  535488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
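The host.minikube.internal entry is installed with a grep-and-rewrite of /etc/hosts, the same pattern used later for control-plane.minikube.internal. Roughly (gateway IP 192.168.49.1 taken from the command in the log above):

	# drop any stale mapping, append the gateway entry, then copy the file back
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts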
	I1101 10:49:04.343366  535488 kubeadm.go:884] updating cluster {Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:49:04.343487  535488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:49:04.343546  535488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:04.377239  535488 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:04.377260  535488 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:49:04.377314  535488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:49:04.403005  535488 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:49:04.403031  535488 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:49:04.403040  535488 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 10:49:04.403128  535488 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-780397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:49:04.403213  535488 ssh_runner.go:195] Run: crio config
	I1101 10:49:04.482972  535488 cni.go:84] Creating CNI manager for ""
	I1101 10:49:04.482998  535488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:04.483017  535488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:49:04.483063  535488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-780397 NodeName:addons-780397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:49:04.483198  535488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-780397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
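If a rendered config like the one above needs to be sanity-checked outside of minikube, kubeadm can parse and dry-run it directly. A sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.34.1 and the kubeadm.yaml.new path that the log shows being copied to the node a few lines below:

	# validate the generated kubeadm config without changing the node
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run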
	I1101 10:49:04.483273  535488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:49:04.490891  535488 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:49:04.491009  535488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:49:04.498674  535488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 10:49:04.512026  535488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:49:04.525112  535488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1101 10:49:04.537775  535488 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:49:04.541252  535488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:49:04.550929  535488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:04.673791  535488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:04.689947  535488 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397 for IP: 192.168.49.2
	I1101 10:49:04.689980  535488 certs.go:195] generating shared ca certs ...
	I1101 10:49:04.690012  535488 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:04.690182  535488 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 10:49:04.855706  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt ...
	I1101 10:49:04.855735  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt: {Name:mkd8cc2887830a159b2b1c088105b8ccf386520b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:04.855964  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key ...
	I1101 10:49:04.855979  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key: {Name:mk207a6fa593d5625b07de77baa039bb8fc57bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:04.856070  535488 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 10:49:05.388339  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt ...
	I1101 10:49:05.388372  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt: {Name:mk0f59d993b941d17205757d41b370114a519a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.388567  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key ...
	I1101 10:49:05.388576  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key: {Name:mk4449e2883a1aab70403a8d895c70ff11b4b1c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.388644  535488 certs.go:257] generating profile certs ...
	I1101 10:49:05.388706  535488 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.key
	I1101 10:49:05.388723  535488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt with IP's: []
	I1101 10:49:05.544497  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt ...
	I1101 10:49:05.544528  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: {Name:mk51ba45dd6c14cf21a89025d4cd908340a0bd64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.544717  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.key ...
	I1101 10:49:05.544730  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.key: {Name:mkef223e034dabd3326eab9daab64983adec8a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.544825  535488 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1
	I1101 10:49:05.544850  535488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 10:49:05.863565  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1 ...
	I1101 10:49:05.863596  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1: {Name:mk9254a5bc443d9f07db240ebfd018a13e8bf5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.863765  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1 ...
	I1101 10:49:05.863781  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1: {Name:mkcc27664c24c9be4d28b11b66f6567eb79c4f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:05.863867  535488 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt.0601b8c1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt
	I1101 10:49:05.863954  535488 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key.0601b8c1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key
	I1101 10:49:05.864018  535488 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key
	I1101 10:49:05.864040  535488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt with IP's: []
	I1101 10:49:06.170540  535488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt ...
	I1101 10:49:06.170573  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt: {Name:mk63530afe97c13fc8ee2daeda202fbe67a9b5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:06.170747  535488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key ...
	I1101 10:49:06.170761  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key: {Name:mk90a2c6ef768d14b23aab641ad8dfde452d56de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
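	The apiserver certificate generated above is signed for the kubernetes Service IP (10.96.0.1), the loopback address, 10.0.0.1, and the node IP 192.168.49.2. A small, illustrative Go check that reads the profile's apiserver.crt and prints its IP SANs so they can be compared with that list (the path is copied from the log and is an assumption about your environment; this is not part of the test suite):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path follows the profile layout shown in the log; adjust to your environment.
	path := "/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in", path)
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Expect 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 per the log above.
	for _, ip := range cert.IPAddresses {
		fmt.Println(ip)
	}
}
```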
	I1101 10:49:06.170954  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 10:49:06.171004  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:49:06.171042  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:49:06.171071  535488 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 10:49:06.171687  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:49:06.189110  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:49:06.205936  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:49:06.223945  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:49:06.241394  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:49:06.258152  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:49:06.275381  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:49:06.292683  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:49:06.309424  535488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:49:06.325835  535488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:49:06.337890  535488 ssh_runner.go:195] Run: openssl version
	I1101 10:49:06.344626  535488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:49:06.352604  535488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:06.356194  535488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:06.356265  535488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:49:06.396815  535488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:49:06.405274  535488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:49:06.408874  535488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:49:06.408956  535488 kubeadm.go:401] StartCluster: {Name:addons-780397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-780397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:49:06.409046  535488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:49:06.409117  535488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:49:06.436149  535488 cri.go:89] found id: ""
	I1101 10:49:06.436229  535488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:49:06.444267  535488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:49:06.451771  535488 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:49:06.451888  535488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:49:06.459879  535488 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:49:06.459897  535488 kubeadm.go:158] found existing configuration files:
	
	I1101 10:49:06.459950  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:49:06.467480  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:49:06.467543  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:49:06.474722  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:49:06.482252  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:49:06.482336  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:49:06.489771  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:49:06.497210  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:49:06.497302  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:49:06.504782  535488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:49:06.512667  535488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:49:06.512736  535488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
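	Each grep/rm pair above applies the same stale-config rule: keep the file only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can write a fresh copy. A hedged Go sketch of that loop (the file list and endpoint are taken from the log; the code is illustrative, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected endpoint, keep it
		}
		// Missing or stale: remove it so kubeadm regenerates a fresh copy.
		if err := os.Remove(f); err == nil {
			fmt.Println("removed stale config:", f)
		}
	}
}
```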
	I1101 10:49:06.520162  535488 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:49:06.585511  535488 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 10:49:06.585790  535488 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 10:49:06.652678  535488 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:49:25.866906  535488 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:49:25.866964  535488 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:49:25.867056  535488 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:49:25.867132  535488 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 10:49:25.867168  535488 kubeadm.go:319] OS: Linux
	I1101 10:49:25.867217  535488 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:49:25.867267  535488 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 10:49:25.867316  535488 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:49:25.867366  535488 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:49:25.867418  535488 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:49:25.867478  535488 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:49:25.867526  535488 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:49:25.867577  535488 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:49:25.867624  535488 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 10:49:25.867698  535488 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:49:25.867796  535488 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:49:25.867888  535488 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:49:25.867952  535488 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:49:25.870898  535488 out.go:252]   - Generating certificates and keys ...
	I1101 10:49:25.871021  535488 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:49:25.871095  535488 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:49:25.871182  535488 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:49:25.871246  535488 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:49:25.871314  535488 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:49:25.871370  535488 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:49:25.871446  535488 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:49:25.871604  535488 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-780397 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 10:49:25.871693  535488 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:49:25.871839  535488 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-780397 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 10:49:25.871932  535488 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:49:25.872051  535488 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:49:25.872103  535488 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:49:25.872164  535488 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:49:25.872217  535488 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:49:25.872294  535488 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:49:25.872382  535488 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:49:25.872465  535488 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:49:25.872551  535488 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:49:25.872688  535488 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:49:25.872772  535488 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:49:25.876154  535488 out.go:252]   - Booting up control plane ...
	I1101 10:49:25.876273  535488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:49:25.876362  535488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:49:25.876436  535488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:49:25.876580  535488 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:49:25.876691  535488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:49:25.876814  535488 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:49:25.876941  535488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:49:25.876990  535488 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:49:25.877170  535488 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:49:25.877317  535488 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:49:25.877394  535488 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501375941s
	I1101 10:49:25.877523  535488 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:49:25.877630  535488 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 10:49:25.877754  535488 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:49:25.877867  535488 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:49:25.877979  535488 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.985340677s
	I1101 10:49:25.878056  535488 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.33943431s
	I1101 10:49:25.878160  535488 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501592745s
	I1101 10:49:25.878296  535488 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:49:25.878431  535488 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:49:25.878496  535488 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:49:25.878715  535488 kubeadm.go:319] [mark-control-plane] Marking the node addons-780397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:49:25.878778  535488 kubeadm.go:319] [bootstrap-token] Using token: j1qabl.r7grcx4jd7tbjvaf
	I1101 10:49:25.882602  535488 out.go:252]   - Configuring RBAC rules ...
	I1101 10:49:25.882738  535488 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:49:25.882828  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:49:25.882975  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:49:25.883107  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:49:25.883228  535488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:49:25.883318  535488 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:49:25.883455  535488 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:49:25.883546  535488 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:49:25.883634  535488 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:49:25.883661  535488 kubeadm.go:319] 
	I1101 10:49:25.883763  535488 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:49:25.883788  535488 kubeadm.go:319] 
	I1101 10:49:25.883900  535488 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:49:25.883924  535488 kubeadm.go:319] 
	I1101 10:49:25.883984  535488 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:49:25.884079  535488 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:49:25.884149  535488 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:49:25.884162  535488 kubeadm.go:319] 
	I1101 10:49:25.884226  535488 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:49:25.884235  535488 kubeadm.go:319] 
	I1101 10:49:25.884286  535488 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:49:25.884294  535488 kubeadm.go:319] 
	I1101 10:49:25.884352  535488 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:49:25.884458  535488 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:49:25.884557  535488 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:49:25.884567  535488 kubeadm.go:319] 
	I1101 10:49:25.884659  535488 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:49:25.884753  535488 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:49:25.884763  535488 kubeadm.go:319] 
	I1101 10:49:25.884857  535488 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j1qabl.r7grcx4jd7tbjvaf \
	I1101 10:49:25.884995  535488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 10:49:25.885021  535488 kubeadm.go:319] 	--control-plane 
	I1101 10:49:25.885029  535488 kubeadm.go:319] 
	I1101 10:49:25.885138  535488 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:49:25.885176  535488 kubeadm.go:319] 
	I1101 10:49:25.885271  535488 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j1qabl.r7grcx4jd7tbjvaf \
	I1101 10:49:25.885397  535488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
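	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA. A short, illustrative Go sketch that recomputes it from the CA certificate path used in this run, so the value can be checked against the log (kubeadm computes the equivalent internally):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// In-cluster CA path from the log; adjust if verifying from another machine.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Hash over the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```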
	I1101 10:49:25.885426  535488 cni.go:84] Creating CNI manager for ""
	I1101 10:49:25.885438  535488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:49:25.890337  535488 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:49:25.893097  535488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:49:25.897215  535488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:49:25.897237  535488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:49:25.910597  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:49:26.199006  535488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:49:26.199161  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:26.199263  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-780397 minikube.k8s.io/updated_at=2025_11_01T10_49_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-780397 minikube.k8s.io/primary=true
	I1101 10:49:26.354666  535488 ops.go:34] apiserver oom_adj: -16
	I1101 10:49:26.354776  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:26.855427  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:27.354949  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:27.855459  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:28.355295  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:28.855766  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:29.355099  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:29.855328  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.355676  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.855346  535488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:49:30.987167  535488 kubeadm.go:1114] duration metric: took 4.788060575s to wait for elevateKubeSystemPrivileges
	I1101 10:49:30.987193  535488 kubeadm.go:403] duration metric: took 24.578274417s to StartCluster
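	The repeated `kubectl get sa default` runs above are a poll loop: minikube waits for the default service account to exist before granting elevated privileges via the minikube-rbac binding. An illustrative Go version of that wait, reusing the binary and kubeconfig paths from the log (the 2-minute deadline is an assumption, not the value minikube uses):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same command the log shows being retried every ~500ms.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```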
	I1101 10:49:30.987209  535488 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:30.987316  535488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:49:30.987693  535488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:49:30.987889  535488 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:49:30.988056  535488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:49:30.988314  535488 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:30.988342  535488 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 10:49:30.988414  535488 addons.go:70] Setting yakd=true in profile "addons-780397"
	I1101 10:49:30.988427  535488 addons.go:239] Setting addon yakd=true in "addons-780397"
	I1101 10:49:30.988448  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.988935  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.989430  535488 addons.go:70] Setting metrics-server=true in profile "addons-780397"
	I1101 10:49:30.989446  535488 addons.go:239] Setting addon metrics-server=true in "addons-780397"
	I1101 10:49:30.989468  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.989912  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.990057  535488 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-780397"
	I1101 10:49:30.990097  535488 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-780397"
	I1101 10:49:30.990126  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.990547  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.992906  535488 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-780397"
	I1101 10:49:30.993185  535488 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-780397"
	I1101 10:49:30.993233  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:30.993057  535488 addons.go:70] Setting cloud-spanner=true in profile "addons-780397"
	I1101 10:49:30.993066  535488 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-780397"
	I1101 10:49:30.993070  535488 addons.go:70] Setting default-storageclass=true in profile "addons-780397"
	I1101 10:49:30.993074  535488 addons.go:70] Setting gcp-auth=true in profile "addons-780397"
	I1101 10:49:30.993077  535488 addons.go:70] Setting ingress=true in profile "addons-780397"
	I1101 10:49:30.993079  535488 addons.go:70] Setting ingress-dns=true in profile "addons-780397"
	I1101 10:49:30.993082  535488 addons.go:70] Setting inspektor-gadget=true in profile "addons-780397"
	I1101 10:49:30.993113  535488 out.go:179] * Verifying Kubernetes components...
	I1101 10:49:30.993128  535488 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-780397"
	I1101 10:49:30.993141  535488 addons.go:70] Setting registry=true in profile "addons-780397"
	I1101 10:49:30.993144  535488 addons.go:70] Setting registry-creds=true in profile "addons-780397"
	I1101 10:49:30.993147  535488 addons.go:70] Setting storage-provisioner=true in profile "addons-780397"
	I1101 10:49:30.993151  535488 addons.go:70] Setting volumesnapshots=true in profile "addons-780397"
	I1101 10:49:30.993154  535488 addons.go:70] Setting volcano=true in profile "addons-780397"
	I1101 10:49:30.994744  535488 addons.go:239] Setting addon cloud-spanner=true in "addons-780397"
	I1101 10:49:30.995230  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:30.995266  535488 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-780397"
	I1101 10:49:31.002103  535488 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-780397"
	I1101 10:49:31.003013  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.003252  535488 addons.go:239] Setting addon volumesnapshots=true in "addons-780397"
	I1101 10:49:31.010265  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.010875  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002178  535488 mustload.go:66] Loading cluster: addons-780397
	I1101 10:49:31.022377  535488 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:31.022819  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.027917  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.029903  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.003407  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.044032  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002273  535488 addons.go:239] Setting addon ingress-dns=true in "addons-780397"
	I1101 10:49:31.059762  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.063761  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002287  535488 addons.go:239] Setting addon ingress=true in "addons-780397"
	I1101 10:49:31.083271  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.002303  535488 addons.go:239] Setting addon inspektor-gadget=true in "addons-780397"
	I1101 10:49:31.088317  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.088857  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.090768  535488 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 10:49:31.129441  535488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:49:31.002428  535488 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-780397"
	I1101 10:49:31.130105  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002541  535488 addons.go:239] Setting addon registry-creds=true in "addons-780397"
	I1101 10:49:31.142239  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.142951  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002551  535488 addons.go:239] Setting addon registry=true in "addons-780397"
	I1101 10:49:31.178775  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.179354  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.002568  535488 addons.go:239] Setting addon storage-provisioner=true in "addons-780397"
	I1101 10:49:31.179493  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.179953  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.003423  535488 addons.go:239] Setting addon volcano=true in "addons-780397"
	I1101 10:49:31.197855  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.198419  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.211557  535488 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 10:49:31.212162  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.215517  535488 addons.go:239] Setting addon default-storageclass=true in "addons-780397"
	I1101 10:49:31.215558  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.217625  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.240538  535488 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 10:49:31.240663  535488 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 10:49:31.245809  535488 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 10:49:31.245834  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 10:49:31.245900  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.246121  535488 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 10:49:31.246131  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 10:49:31.246177  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.267628  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 10:49:31.267652  535488 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 10:49:31.267715  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.290783  535488 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-780397"
	I1101 10:49:31.290823  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.291230  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:31.310339  535488 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 10:49:31.313244  535488 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 10:49:31.313270  535488 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 10:49:31.313343  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.328988  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 10:49:31.329720  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 10:49:31.333751  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 10:49:31.333775  535488 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 10:49:31.333842  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.303381  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 10:49:31.335862  535488 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 10:49:31.335940  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.303627  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:31.364920  535488 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 10:49:31.366913  535488 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 10:49:31.370087  535488 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 10:49:31.370102  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 10:49:31.370164  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.391567  535488 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 10:49:31.391633  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 10:49:31.391731  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.303982  535488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:49:31.392929  535488 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:49:31.392942  535488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:49:31.393023  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.401469  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 10:49:31.401559  535488 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1101 10:49:31.401965  535488 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 10:49:31.433545  535488 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 10:49:31.433563  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 10:49:31.433630  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.462066  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 10:49:31.472079  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 10:49:31.472973  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 10:49:31.484699  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 10:49:31.484979  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.494074  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.510423  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 10:49:31.511014  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 10:49:31.523533  535488 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 10:49:31.523555  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 10:49:31.523623  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.531714  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 10:49:31.541542  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 10:49:31.545614  535488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 10:49:31.549432  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.550098  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 10:49:31.550114  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 10:49:31.550173  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.604086  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.605290  535488 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:49:31.605345  535488 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 10:49:31.610961  535488 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:49:31.610984  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:49:31.611051  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.615990  535488 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 10:49:31.618783  535488 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 10:49:31.618802  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 10:49:31.618870  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.636248  535488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:49:31.647331  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.648441  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.648955  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.664091  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.664707  535488 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 10:49:31.669232  535488 out.go:179]   - Using image docker.io/busybox:stable
	I1101 10:49:31.674171  535488 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 10:49:31.674195  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 10:49:31.674274  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:31.677675  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.723426  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.730776  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.732839  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.761005  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.765308  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:31.766449  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:32.148809  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 10:49:32.186115  535488 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:32.186139  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 10:49:32.191724  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 10:49:32.227068  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 10:49:32.239701  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 10:49:32.239743  535488 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 10:49:32.276562  535488 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 10:49:32.276589  535488 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 10:49:32.278057  535488 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 10:49:32.278094  535488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 10:49:32.311151  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:32.320692  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 10:49:32.320716  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 10:49:32.325593  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:49:32.336711  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 10:49:32.336749  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 10:49:32.367519  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 10:49:32.387419  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 10:49:32.398795  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 10:49:32.407902  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 10:49:32.407929  535488 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 10:49:32.413231  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 10:49:32.447411  535488 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 10:49:32.447445  535488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 10:49:32.452271  535488 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 10:49:32.452295  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 10:49:32.489317  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:49:32.495796  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 10:49:32.495831  535488 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 10:49:32.518895  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 10:49:32.518922  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 10:49:32.596350  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 10:49:32.596376  535488 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 10:49:32.614660  535488 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 10:49:32.614723  535488 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 10:49:32.624212  535488 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 10:49:32.624283  535488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 10:49:32.678211  535488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.285874651s)
	I1101 10:49:32.678282  535488 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
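The sed expression above adds a hosts block (and a log directive) to the CoreDNS Corefile stored in the kube-system/coredns ConfigMap. An illustrative reconstruction of the patched fragment, derived from the command itself rather than from captured output:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml
	# expected fragment after the replace (illustrative):
	#     log
	#     errors
	#     ...
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	#     forward . /etc/resolv.conf ...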
	I1101 10:49:32.679241  535488 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.042970828s)
	I1101 10:49:32.679953  535488 node_ready.go:35] waiting up to 6m0s for node "addons-780397" to be "Ready" ...
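The node_ready poll that follows keeps checking the Ready condition on the node object; a minimal manual equivalent, assuming kubectl is pointed at this cluster:

	kubectl get node addons-780397 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'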
	I1101 10:49:32.692638  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 10:49:32.720683  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 10:49:32.720754  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 10:49:32.783442  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 10:49:32.789513  535488 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 10:49:32.789578  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 10:49:32.996380  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 10:49:32.996455  535488 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 10:49:33.039502  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 10:49:33.039580  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 10:49:33.069759  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 10:49:33.183876  535488 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-780397" context rescaled to 1 replicas
	I1101 10:49:33.279900  535488 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 10:49:33.279971  535488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 10:49:33.288251  535488 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 10:49:33.288326  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 10:49:33.541502  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 10:49:33.541588  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 10:49:33.580971  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 10:49:33.722911  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 10:49:33.722989  535488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 10:49:33.885611  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.736765797s)
	I1101 10:49:33.885759  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.694002347s)
	I1101 10:49:33.929682  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 10:49:33.929780  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 10:49:34.194496  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 10:49:34.194563  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 10:49:34.355343  535488 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 10:49:34.355409  535488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 10:49:34.501269  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1101 10:49:34.755341  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:34.891137  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.664031865s)
	I1101 10:49:36.387592  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.076405603s)
	W1101 10:49:36.387628  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:36.387652  535488 retry.go:31] will retry after 225.84602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
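The error above means at least one YAML document in ig-crd.yaml reaches the API server without the mandatory apiVersion and kind fields (a stray document separator or a truncated document produces exactly this message); the --validate=false flag kubectl suggests would only suppress the check, not fix the manifest. An illustrative way to inspect the file on the node:

	# every document should open with a header along these lines (illustrative):
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	sudo grep -n -e '^---$' -e '^apiVersion:' -e '^kind:' /etc/kubernetes/addons/ig-crd.yaml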
	I1101 10:49:36.387681  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.06205875s)
	I1101 10:49:36.387907  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.020363722s)
	I1101 10:49:36.388034  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.000594004s)
	I1101 10:49:36.388086  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.989260899s)
	W1101 10:49:36.428085  535488 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
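The failure above is the standard optimistic-concurrency conflict (another writer updated the StorageClass between read and write); marking local-path as the default class normally succeeds on a retry. A hedged sketch of the manual equivalent:

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'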
	I1101 10:49:36.613992  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 10:49:37.231008  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:37.554101  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064745747s)
	I1101 10:49:37.554200  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.861471734s)
	I1101 10:49:37.554249  535488 addons.go:480] Verifying addon registry=true in "addons-780397"
	I1101 10:49:37.554319  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.770804172s)
	I1101 10:49:37.554340  535488 addons.go:480] Verifying addon metrics-server=true in "addons-780397"
	I1101 10:49:37.554258  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.14099713s)
	I1101 10:49:37.554559  535488 addons.go:480] Verifying addon ingress=true in "addons-780397"
	I1101 10:49:37.554581  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.484734129s)
	I1101 10:49:37.554942  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.973883051s)
	W1101 10:49:37.554970  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 10:49:37.554986  535488 retry.go:31] will retry after 137.300793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 10:49:37.557497  535488 out.go:179] * Verifying ingress addon...
	I1101 10:49:37.557496  535488 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-780397 service yakd-dashboard -n yakd-dashboard
	
	I1101 10:49:37.557615  535488 out.go:179] * Verifying registry addon...
	I1101 10:49:37.561410  535488 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 10:49:37.563270  535488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 10:49:37.589897  535488 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 10:49:37.589918  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:37.602381  535488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 10:49:37.602400  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
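The kapi waits below poll these label selectors until the pods leave Pending; the same state can be checked by hand (illustrative, selectors taken from the log):

	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry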
	I1101 10:49:37.693188  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 10:49:38.053529  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.552171451s)
	I1101 10:49:38.053565  535488 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-780397"
	I1101 10:49:38.056776  535488 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 10:49:38.060409  535488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 10:49:38.075263  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:38.075499  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:38.075568  535488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 10:49:38.075580  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:38.109380  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.495345918s)
	W1101 10:49:38.109420  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:38.109440  535488 retry.go:31] will retry after 426.854895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
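Each failed apply is rescheduled with a growing delay (225ms, then 426ms, 558ms, and up to several seconds later in the log). A minimal shell sketch of that retry-with-backoff pattern, not minikube's actual retry implementation:

	delay=0.25
	for attempt in 1 2 3 4 5 6; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	    apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"                                  # back off before the next attempt
	  delay=$(awk -v d="$delay" 'BEGIN{print d*2}')   # double the delay each round
	done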
	I1101 10:49:38.537139  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:38.569404  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:38.572206  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:38.670190  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:38.973861  535488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 10:49:38.974014  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:38.993971  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:49:39.066745  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:39.067993  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:39.068101  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:39.120082  535488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 10:49:39.137075  535488 addons.go:239] Setting addon gcp-auth=true in "addons-780397"
	I1101 10:49:39.137137  535488 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:49:39.137640  535488 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:49:39.164048  535488 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 10:49:39.164135  535488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:49:39.188419  535488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
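Before the gcp-auth manifests are applied, the credentials file and project id staged by the scp steps above can be checked from the host; an illustrative probe using this run's profile name:

	minikube -p addons-780397 ssh -- sudo cat /var/lib/minikube/google_cloud_project
	minikube -p addons-780397 ssh -- sudo stat /var/lib/minikube/google_application_credentials.json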
	W1101 10:49:39.446014  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:39.446109  535488 retry.go:31] will retry after 558.688082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:39.449824  535488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 10:49:39.452781  535488 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 10:49:39.455613  535488 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 10:49:39.455645  535488 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 10:49:39.469768  535488 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 10:49:39.469794  535488 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 10:49:39.483572  535488 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 10:49:39.483597  535488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 10:49:39.498861  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 10:49:39.572768  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:39.573316  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:39.573674  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 10:49:39.683890  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:39.995661  535488 addons.go:480] Verifying addon gcp-auth=true in "addons-780397"
	I1101 10:49:39.998589  535488 out.go:179] * Verifying gcp-auth addon...
	I1101 10:49:40.002449  535488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 10:49:40.005265  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:40.015138  535488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 10:49:40.015169  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:40.112545  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:40.113481  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:40.113920  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:40.505938  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:40.567104  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:40.567655  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:40.569460  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:40.858771  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:40.858847  535488 retry.go:31] will retry after 654.780752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:41.006301  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:41.063848  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:41.065145  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:41.066334  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:41.505511  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:41.514620  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:41.566420  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:41.568590  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:41.570058  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:42.006717  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:42.068023  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:42.069194  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:42.069716  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:49:42.184409  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	W1101 10:49:42.347143  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:42.347175  535488 retry.go:31] will retry after 1.753587663s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:42.508327  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:42.565646  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:42.566268  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:42.567043  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:43.006597  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:43.066384  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:43.066738  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:43.067204  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:43.505970  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:43.564990  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:43.566035  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:43.566168  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:44.007094  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:44.065497  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:44.065765  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:44.067689  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:44.101781  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:44.507058  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:44.609611  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:44.610330  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:44.610401  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:44.682821  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	W1101 10:49:44.929377  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:44.929463  535488 retry.go:31] will retry after 1.677613923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:45.008159  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:45.071006  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:45.071154  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:45.072907  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:45.506091  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:45.566171  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:45.566313  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:45.566367  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:46.009994  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:46.064860  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:46.066413  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:46.067091  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:46.506770  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:46.565613  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:46.566560  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:46.567227  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:46.607294  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 10:49:46.683829  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:47.006715  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:47.071091  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:47.071488  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:47.072214  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:47.423444  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:47.423488  535488 retry.go:31] will retry after 2.803123831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:47.505286  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:47.563947  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:47.564917  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:47.565855  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:48.008180  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:48.064511  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:48.066255  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:48.067442  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:48.506237  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:48.565446  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:48.565613  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:48.566638  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:49.006033  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:49.063839  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:49.065321  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:49.066240  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:49.183205  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:49.505352  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:49.564561  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:49.565937  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:49.566116  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:50.012508  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:50.065053  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:50.065427  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:50.066400  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:50.227762  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:50.506047  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:50.565822  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:50.565832  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:50.567960  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:51.026080  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:51.072508  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:51.072701  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:51.072897  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:49:51.074358  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:51.074391  535488 retry.go:31] will retry after 5.648000345s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:49:51.183255  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:51.505247  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:51.566807  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:51.567202  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:51.567277  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:52.011377  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:52.065417  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:52.065501  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:52.066467  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:52.506271  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:52.564281  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:52.565192  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:52.565900  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:53.011543  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:53.064630  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:53.065678  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:53.066846  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:53.183588  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:53.505564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:53.564382  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:53.564711  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:53.567092  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:54.008054  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:54.064677  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:54.065851  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:54.066608  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:54.505885  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:54.606589  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:54.606719  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:54.606976  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:55.008237  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:55.064639  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:55.066870  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:55.067313  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:55.183788  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:55.506254  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:55.564380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:55.564524  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:55.566506  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:56.007591  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:56.064140  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:56.064263  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:56.066557  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:56.506990  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:56.563966  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:56.564942  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:56.566248  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:56.722920  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:49:57.005669  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:57.066127  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:57.066254  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:57.067851  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:57.505575  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 10:49:57.535819  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:49:57.535869  535488 retry.go:31] will retry after 6.925190166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
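The repeated apply failures above all come from kubectl refusing /etc/kubernetes/addons/ig-crd.yaml: every Kubernetes manifest document must declare top-level apiVersion and kind fields, and validation reports both as missing from that file. As an illustration only (this is not minikube's code, and checkManifest is a hypothetical helper name), a small Go sketch that walks a manifest file and flags any document missing those two fields:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// manifestHeader holds the two fields kubectl reported as "not set".
	type manifestHeader struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	// checkManifest decodes each YAML document in path and verifies that
	// apiVersion and kind are present, which is the condition behind the
	// "apiVersion not set, kind not set" validation error in the log above.
	func checkManifest(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var hdr manifestHeader
			err := dec.Decode(&hdr)
			if err == io.EOF {
				return nil
			}
			if err != nil {
				return fmt.Errorf("parse %s (document %d): %w", path, i, err)
			}
			if hdr.APIVersion == "" || hdr.Kind == "" {
				return fmt.Errorf("%s (document %d): apiVersion and kind must both be set", path, i)
			}
		}
	}

	func main() {
		// Path taken from the log; point it at a local copy when reproducing.
		if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

A pre-check like this would surface the malformed addon manifest directly instead of leaving kubectl to fail validation and the addon enabler to retry.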
	I1101 10:49:57.563550  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:57.565824  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:57.566398  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:57.683094  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:49:58.007969  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:58.063534  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:58.066224  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:58.066759  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:58.505898  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:58.564550  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:58.564713  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:58.566197  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:59.007784  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:59.063877  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:59.065797  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:59.066404  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:49:59.505335  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:49:59.564152  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:49:59.564481  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:49:59.566266  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:49:59.683241  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:00.024548  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:00.103510  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:00.109613  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:00.110838  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:00.507625  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:00.569743  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:00.570953  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:00.571122  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:01.011683  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:01.063989  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:01.065620  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:01.066829  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:01.505623  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:01.565604  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:01.565988  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:01.567299  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:01.683303  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:02.006326  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:02.066038  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:02.066252  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:02.067312  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:02.506380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:02.564497  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:02.564621  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:02.566386  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:03.006925  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:03.066318  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:03.066482  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:03.066548  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:03.505768  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:03.563686  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:03.564689  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:03.566405  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:04.012142  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:04.063948  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:04.065453  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:04.065905  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:04.183859  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:04.462024  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:04.505521  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:04.568151  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:04.568579  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:04.568642  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:05.006724  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:05.066712  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:05.067277  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:05.072847  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:05.292260  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:05.292295  535488 retry.go:31] will retry after 6.813572612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:05.505597  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:05.564098  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:05.565552  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:05.566960  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:06.009783  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:06.065519  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:06.065542  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:06.066965  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:06.506705  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:06.563922  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:06.565897  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:06.566812  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:06.683622  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:07.006702  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:07.063697  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:07.065338  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:07.066259  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:07.505606  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:07.566518  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:07.567008  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:07.567124  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:08.009248  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:08.065039  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:08.065236  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:08.066735  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:08.505996  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:08.564853  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:08.565501  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:08.566693  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:09.006341  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:09.065922  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:09.068623  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:09.069291  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:50:09.183144  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:09.505544  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:09.563691  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:09.565492  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:09.566380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:10.008360  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:10.064046  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:10.065464  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:10.066555  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:10.506212  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:10.564705  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:10.564953  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:10.565826  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:11.007774  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:11.063731  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:11.065330  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:11.066147  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:11.505256  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:11.565938  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:11.566034  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:11.567398  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:11.683247  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:12.008567  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:12.063903  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:12.065389  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:12.066381  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:12.106633  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:12.505865  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:12.564202  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:12.566613  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:12.567197  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 10:50:12.901670  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:12.901723  535488 retry.go:31] will retry after 10.479824052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
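Note how the applier does not abort on the first failure: each "apply failed, will retry" warning is followed by a "will retry after ...s" line from a retry helper that re-runs the same kubectl command after a randomized delay (6.9s, 6.8s, then 10.5s here). A minimal sketch of that retry-with-jitter pattern, illustrative only and not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter re-runs fn until it succeeds or attempts run out,
	// sleeping a randomized delay between tries, echoing the
	// "will retry after ..." lines in the log above.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter the wait so repeated failures do not hammer the API in lockstep.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		// Hypothetical stand-in for the failing kubectl apply of the
		// inspektor-gadget manifests seen above.
		applyGadget := func() error {
			return fmt.Errorf("ig-crd.yaml: apiVersion not set, kind not set")
		}
		fmt.Println(retryWithJitter(5, 5*time.Second, applyGadget))
	}

Since the manifest itself never changes between attempts, these retries can only fail the same way; retrying helps with transient API errors, not with an invalid file.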
	I1101 10:50:13.006615  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:13.065793  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:13.067214  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:13.067942  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:13.506080  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:13.566449  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:13.568073  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:13.568271  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 10:50:13.683471  535488 node_ready.go:57] node "addons-780397" has "Ready":"False" status (will retry)
	I1101 10:50:14.017562  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:14.114385  535488 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 10:50:14.114412  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:14.114839  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:14.115171  535488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 10:50:14.115188  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:14.240232  535488 node_ready.go:49] node "addons-780397" is "Ready"
	I1101 10:50:14.240264  535488 node_ready.go:38] duration metric: took 41.560262327s for node "addons-780397" to be "Ready" ...
	I1101 10:50:14.240279  535488 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:50:14.240343  535488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:50:14.267412  535488 api_server.go:72] duration metric: took 43.279495028s to wait for apiserver process to appear ...
	I1101 10:50:14.267493  535488 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:50:14.267528  535488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 10:50:14.288425  535488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 10:50:14.296203  535488 api_server.go:141] control plane version: v1.34.1
	I1101 10:50:14.296284  535488 api_server.go:131] duration metric: took 28.769527ms to wait for apiserver health ...
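Once the node reports Ready at 10:50:14, the wait shifts from pod polling to the API server itself: pgrep confirms the kube-apiserver process, then an HTTPS GET against /healthz is repeated until it returns 200 with the body "ok", as logged above. A rough Go sketch of such a probe, illustrative only (the real client trusts the cluster CA rather than skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers
	// HTTP 200 with "ok", or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Quick-and-dirty probe: skip certificate verification for the
			// self-signed apiserver cert inside the cluster network.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log line above.
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}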
	I1101 10:50:14.296308  535488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:50:14.397421  535488 system_pods.go:59] 19 kube-system pods found
	I1101 10:50:14.397524  535488 system_pods.go:61] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:14.397549  535488 system_pods.go:61] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending
	I1101 10:50:14.397586  535488 system_pods.go:61] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending
	I1101 10:50:14.397617  535488 system_pods.go:61] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending
	I1101 10:50:14.397640  535488 system_pods.go:61] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:14.397679  535488 system_pods.go:61] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:14.397778  535488 system_pods.go:61] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:14.397804  535488 system_pods.go:61] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:14.397832  535488 system_pods.go:61] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending
	I1101 10:50:14.397875  535488 system_pods.go:61] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:14.397897  535488 system_pods.go:61] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:14.397939  535488 system_pods.go:61] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:14.397967  535488 system_pods.go:61] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending
	I1101 10:50:14.397989  535488 system_pods.go:61] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending
	I1101 10:50:14.398028  535488 system_pods.go:61] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:14.398057  535488 system_pods.go:61] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:14.398082  535488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending
	I1101 10:50:14.398119  535488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.398152  535488 system_pods.go:61] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:14.398204  535488 system_pods.go:74] duration metric: took 101.875517ms to wait for pod list to return data ...
	I1101 10:50:14.398232  535488 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:50:14.431073  535488 default_sa.go:45] found service account: "default"
	I1101 10:50:14.431149  535488 default_sa.go:55] duration metric: took 32.894698ms for default service account to be created ...
	I1101 10:50:14.431173  535488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:50:14.476660  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:14.476743  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:14.476766  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending
	I1101 10:50:14.476789  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending
	I1101 10:50:14.476829  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending
	I1101 10:50:14.476848  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:14.476895  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:14.476919  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:14.476942  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:14.476978  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending
	I1101 10:50:14.477002  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:14.477022  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:14.477061  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:14.477088  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending
	I1101 10:50:14.477109  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending
	I1101 10:50:14.477153  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:14.477182  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:14.477205  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending
	I1101 10:50:14.477245  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.477272  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:14.477321  535488 retry.go:31] will retry after 208.676237ms: missing components: kube-dns
	I1101 10:50:14.511785  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:14.578400  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:14.579890  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:14.581070  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:14.704291  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:14.704374  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:14.704401  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 10:50:14.704446  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 10:50:14.704472  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 10:50:14.704494  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:14.704534  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:14.704560  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:14.704582  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:14.704623  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending
	I1101 10:50:14.704650  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:14.704671  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:14.704714  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:14.704741  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending
	I1101 10:50:14.704766  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 10:50:14.704804  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:14.704833  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:14.704882  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.704921  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:14.704957  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:14.704995  535488 retry.go:31] will retry after 379.442354ms: missing components: kube-dns
	I1101 10:50:15.032009  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:15.138380  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:15.138633  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:15.138755  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:15.139456  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:15.139520  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:15.139547  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 10:50:15.139587  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 10:50:15.139614  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 10:50:15.139637  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:15.139675  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:15.139702  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:15.139723  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:15.139764  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 10:50:15.139789  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:15.139814  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:15.139851  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:15.139886  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 10:50:15.139927  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 10:50:15.139953  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:15.139976  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:15.140013  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.140042  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.140067  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:50:15.140112  535488 retry.go:31] will retry after 363.623427ms: missing components: kube-dns
	I1101 10:50:15.519911  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:15.529973  535488 system_pods.go:86] 19 kube-system pods found
	I1101 10:50:15.530049  535488 system_pods.go:89] "coredns-66bc5c9577-k9m58" [af60019c-e999-41a9-bc99-b4d3a4eee6a4] Running
	I1101 10:50:15.530078  535488 system_pods.go:89] "csi-hostpath-attacher-0" [9951239c-6147-44f3-a01c-e7160bd3a58e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 10:50:15.530102  535488 system_pods.go:89] "csi-hostpath-resizer-0" [4d78b84b-9eb5-4111-9706-83eefacec626] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 10:50:15.530147  535488 system_pods.go:89] "csi-hostpathplugin-rcv72" [d1e6896a-821e-430c-a2cf-83927cd93b51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 10:50:15.530166  535488 system_pods.go:89] "etcd-addons-780397" [67625d02-c8ed-445f-95ea-46e646f470af] Running
	I1101 10:50:15.530187  535488 system_pods.go:89] "kindnet-lvd2k" [6b973f7f-aed0-4f48-bc11-e081ea2f9c96] Running
	I1101 10:50:15.530219  535488 system_pods.go:89] "kube-apiserver-addons-780397" [2c3c2cfa-d84f-4bb1-8976-1bd53d37b761] Running
	I1101 10:50:15.530244  535488 system_pods.go:89] "kube-controller-manager-addons-780397" [21b2e7be-ec34-4ac2-aa00-a59c295a9974] Running
	I1101 10:50:15.530269  535488 system_pods.go:89] "kube-ingress-dns-minikube" [d43f6163-273a-4b4e-877a-4839d12d05d8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 10:50:15.530290  535488 system_pods.go:89] "kube-proxy-x5kx4" [44aa584d-d5e0-4dd8-8f6d-ddd338f61a7b] Running
	I1101 10:50:15.530323  535488 system_pods.go:89] "kube-scheduler-addons-780397" [9a41542f-d594-41ba-9237-54d48bb3f435] Running
	I1101 10:50:15.530352  535488 system_pods.go:89] "metrics-server-85b7d694d7-lzfmm" [03c133a5-5961-48df-b0c3-63a3d0cf4d1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 10:50:15.530376  535488 system_pods.go:89] "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 10:50:15.530449  535488 system_pods.go:89] "registry-6b586f9694-px94l" [1d2b6d70-7c67-489a-a7da-339c72d285f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 10:50:15.530478  535488 system_pods.go:89] "registry-creds-764b6fb674-dlcvd" [c655c9a8-60bd-4c14-8ad8-6be6773d91c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 10:50:15.530502  535488 system_pods.go:89] "registry-proxy-w5qfc" [98d8539b-da5f-43bb-a9c1-af73897ea5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 10:50:15.530524  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4wv2x" [624ad9b9-faca-4923-9159-a9a68a2e6e23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.530559  535488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k9qvr" [9d15f429-0414-4d9f-9bb6-4ecd2d4170da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 10:50:15.530583  535488 system_pods.go:89] "storage-provisioner" [423ccf2a-6388-4494-b91d-9079812f4d3f] Running
	I1101 10:50:15.530608  535488 system_pods.go:126] duration metric: took 1.09941479s to wait for k8s-apps to be running ...
	I1101 10:50:15.530630  535488 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:50:15.530720  535488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:50:15.546635  535488 system_svc.go:56] duration metric: took 15.994152ms WaitForService to wait for kubelet
	I1101 10:50:15.546667  535488 kubeadm.go:587] duration metric: took 44.558756225s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:50:15.546687  535488 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:50:15.550156  535488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:50:15.550205  535488 node_conditions.go:123] node cpu capacity is 2
	I1101 10:50:15.550217  535488 node_conditions.go:105] duration metric: took 3.524236ms to run NodePressure ...
	I1101 10:50:15.550229  535488 start.go:242] waiting for startup goroutines ...
	I1101 10:50:15.618269  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:15.622677  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:15.623738  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:16.007652  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:16.065649  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:16.066347  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:16.068685  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:16.506495  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:16.608405  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:16.608921  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:16.609357  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:17.007797  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:17.069546  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:17.070084  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:17.070650  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:17.515409  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:17.617979  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:17.618349  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:17.618461  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:18.009164  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:18.071651  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:18.071921  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:18.073420  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:18.508085  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:18.571834  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:18.572213  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:18.572295  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:19.008137  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:19.065004  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:19.065166  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:19.066943  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:19.511507  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:19.611780  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:19.612211  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:19.612564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:20.014841  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:20.067667  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:20.067879  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:20.067964  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:20.506710  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:20.608608  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:20.608844  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:20.609833  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:21.007419  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:21.067563  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:21.067989  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:21.069864  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:21.506221  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:21.567307  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:21.568597  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:21.571366  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:22.006960  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:22.068027  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:22.068543  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:22.071337  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:22.506255  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:22.566957  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:22.567524  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:22.568943  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:23.007476  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:23.064690  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:23.067434  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:23.067846  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:23.382200  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:23.506019  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:23.566896  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:23.567060  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:23.567524  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:24.006403  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:24.066550  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:24.066909  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:24.067120  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:24.466632  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.084393942s)
	W1101 10:50:24.466666  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:24.466683  535488 retry.go:31] will retry after 18.741980911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
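
Editor's note: the retried apply fails because kubectl's client-side validation requires every object in a manifest to declare `apiVersion` and `kind`; the error above reports that the first document in /etc/kubernetes/addons/ig-crd.yaml is missing both. For reference only (the actual addon file is not shown in this log, and the group/resource names below are placeholders), a CustomResourceDefinition header that would pass this validation looks roughly like:

    # illustrative sketch; names are placeholders, not the real inspektor-gadget CRD
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io
    spec:
      group: gadget.example.io
      names:
        kind: Trace
        plural: traces
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object

As the error message itself notes, validation can also be bypassed with --validate=false, though that only suppresses the symptom rather than fixing the manifest.
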
	I1101 10:50:24.505763  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:24.568139  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:24.568297  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:24.568790  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:25.007136  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:25.068356  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:25.073243  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:25.073340  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:25.505651  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:25.565610  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:25.567310  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:25.567534  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:26.006734  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:26.069291  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:26.069499  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:26.070053  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:26.505447  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:26.568493  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:26.569413  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:26.570139  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:27.006954  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:27.067543  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:27.067764  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:27.068126  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:27.505756  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:27.563979  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:27.567595  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:27.567844  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:28.006829  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:28.064837  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:28.067641  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:28.067896  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:28.506028  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:28.565547  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:28.565808  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:28.567926  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:29.008253  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:29.110160  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:29.110417  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:29.110553  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:29.506243  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:29.566080  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:29.567135  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:29.568506  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:30.027441  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:30.066500  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:30.066882  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:30.075539  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:30.506002  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:30.566578  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:30.567145  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:30.567991  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:31.007450  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:31.068247  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:31.068420  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:31.071173  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:31.506164  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:31.567483  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:31.568460  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:31.568829  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:32.007089  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:32.066601  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:32.066814  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:32.069043  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:32.507564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:32.565838  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:32.566099  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:32.567152  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:33.007504  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:33.068365  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:33.069024  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:33.072498  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:33.506823  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:33.564843  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:33.565418  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:33.567320  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:34.012710  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:34.068193  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:34.068312  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:34.068907  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:34.506593  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:34.608006  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:34.608650  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:34.608831  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:35.010592  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:35.069556  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:35.069768  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:35.070095  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:35.506765  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:35.566116  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:35.566315  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:35.567431  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:36.006799  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:36.065088  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:36.066075  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:36.067676  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:36.505752  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:36.563919  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:36.565792  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:36.566911  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:37.026769  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:37.067144  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:37.067665  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:37.069249  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:37.506892  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:37.567667  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:37.568091  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:37.571111  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:38.008281  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:38.068294  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:38.068833  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:38.070987  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:38.506639  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:38.567289  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:38.567837  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:38.569745  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:39.007463  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:39.065127  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:39.066333  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:39.068847  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:39.509952  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:39.611961  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:39.612090  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:39.612263  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:40.019778  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:40.067284  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:40.068004  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:40.068214  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:40.505990  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:40.564776  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:40.567429  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:40.567585  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:41.006918  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:41.067377  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:41.070819  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:41.071514  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:41.506323  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:41.564213  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:41.566516  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:41.566547  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:42.007176  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:42.065378  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:42.065636  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:42.067887  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:42.506964  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:42.564048  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:42.567254  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:42.567556  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:43.008335  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:43.107952  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:43.108521  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:43.108744  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:43.208886  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:50:43.509622  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:43.566811  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:43.567203  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:43.571415  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:44.006136  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:44.065598  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:44.065847  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:44.068702  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:44.224766  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.015833632s)
	W1101 10:50:44.224805  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:44.224824  535488 retry.go:31] will retry after 25.336806971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:50:44.506577  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:44.567684  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:44.567898  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:44.569025  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:45.030267  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:45.128460  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:45.128947  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:45.129436  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:45.510888  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:45.567982  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:45.568253  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:45.571410  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:46.007698  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:46.066485  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:46.066935  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:46.067628  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:46.505589  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:46.564810  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:46.566893  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:46.569329  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:47.006569  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:47.067801  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:47.068248  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:47.069481  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:47.506397  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:47.565638  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:47.567632  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:47.569501  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:48.007045  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:48.067751  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:48.068450  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:48.070247  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:48.505608  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:48.567574  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:48.567894  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:48.567894  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:49.008426  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:49.066111  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:49.066494  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:49.066575  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:49.510756  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:49.610487  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:49.610714  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:49.611588  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:50.015544  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:50.116341  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:50.116465  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:50.117494  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:50.505539  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:50.566846  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:50.567125  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:50.567437  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:51.008492  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:51.064055  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:51.067579  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:51.067760  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:51.506808  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:51.609116  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:51.609482  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:51.609919  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:52.007370  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:52.064552  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:52.066487  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:52.067387  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:52.505286  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:52.566174  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:52.566320  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:52.567897  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:53.006722  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:53.066117  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:53.067504  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:53.068226  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:53.506330  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:53.568163  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:53.568322  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:53.568954  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:54.007411  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:54.067009  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:54.067267  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:54.071107  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:54.506064  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:54.566130  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:54.566202  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:54.567818  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 10:50:55.015526  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:55.115467  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:55.115884  535488 kapi.go:107] duration metric: took 1m17.552614663s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 10:50:55.115943  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:55.506546  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:55.567197  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:55.567607  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:56.007181  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:56.066314  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:56.066738  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:56.506065  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:56.565581  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:56.565869  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:57.006853  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:57.066235  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:57.066671  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:57.506171  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:57.569306  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:57.569452  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:58.010310  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:58.067260  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:58.067814  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:58.506990  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:58.567099  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:58.567764  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:59.006872  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:59.068099  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:59.068750  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:50:59.506194  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:50:59.567356  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:50:59.568041  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:00.077685  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:00.164575  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:00.178615  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:00.506081  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:00.565564  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:00.565739  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:01.010792  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:01.066817  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:01.067118  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:01.505470  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:01.566079  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:01.566620  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:02.011547  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:02.066072  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:02.067199  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:02.506113  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 10:51:02.570177  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:02.570621  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:03.007191  535488 kapi.go:107] duration metric: took 1m23.00474222s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 10:51:03.010437  535488 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-780397 cluster.
	I1101 10:51:03.013455  535488 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 10:51:03.016416  535488 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
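
Editor's note: the three messages above describe the gcp-auth addon's behaviour in this cluster: credentials are mounted into every newly created pod unless the pod carries the opt-out label mentioned in the log. A minimal sketch of such an opt-out is shown below; the pod name and image are placeholders, and the label value "true" is an assumption (the log only names the label key).

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-no-gcp-creds         # placeholder name
      labels:
        gcp-auth-skip-secret: "true"     # key taken from the log message above; value assumed
    spec:
      containers:
        - name: app
          image: busybox                 # placeholder image
          command: ["sleep", "3600"]
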
	I1101 10:51:03.066861  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:03.069819  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:03.566451  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:03.566907  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:04.064978  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:04.066885  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:04.565745  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:04.565919  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:05.072785  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:05.072991  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:05.566084  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:05.566287  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:06.066023  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:06.066180  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:06.565002  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:06.565164  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:07.064946  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:07.065798  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:07.572669  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:07.572854  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:08.069649  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:08.069800  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:08.564721  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:08.565391  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:09.066497  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:09.067133  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:09.561928  535488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 10:51:09.565072  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:09.565361  535488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 10:51:10.064482  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:10.064860  535488 kapi.go:107] duration metric: took 1m32.503450368s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 10:51:10.572201  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:10.957682  535488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.395720352s)
	W1101 10:51:10.957771  535488 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:51:10.957850  535488 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 10:51:11.065233  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:11.563907  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:12.064021  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:12.564703  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:13.064488  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:13.564660  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:14.064668  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:14.564896  535488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 10:51:15.065108  535488 kapi.go:107] duration metric: took 1m37.004699818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 10:51:15.068470  535488 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, ingress-dns, registry-creds, default-storageclass, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1101 10:51:15.071496  535488 addons.go:515] duration metric: took 1m44.083129454s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner ingress-dns registry-creds default-storageclass storage-provisioner metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1101 10:51:15.071557  535488 start.go:247] waiting for cluster config update ...
	I1101 10:51:15.071578  535488 start.go:256] writing updated cluster config ...
	I1101 10:51:15.071923  535488 ssh_runner.go:195] Run: rm -f paused
	I1101 10:51:15.076563  535488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:51:15.082364  535488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k9m58" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.096109  535488 pod_ready.go:94] pod "coredns-66bc5c9577-k9m58" is "Ready"
	I1101 10:51:15.096192  535488 pod_ready.go:86] duration metric: took 13.749482ms for pod "coredns-66bc5c9577-k9m58" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.099587  535488 pod_ready.go:83] waiting for pod "etcd-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.105050  535488 pod_ready.go:94] pod "etcd-addons-780397" is "Ready"
	I1101 10:51:15.105133  535488 pod_ready.go:86] duration metric: took 5.462569ms for pod "etcd-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.108370  535488 pod_ready.go:83] waiting for pod "kube-apiserver-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.113811  535488 pod_ready.go:94] pod "kube-apiserver-addons-780397" is "Ready"
	I1101 10:51:15.113889  535488 pod_ready.go:86] duration metric: took 5.494536ms for pod "kube-apiserver-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.117622  535488 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.481402  535488 pod_ready.go:94] pod "kube-controller-manager-addons-780397" is "Ready"
	I1101 10:51:15.481431  535488 pod_ready.go:86] duration metric: took 363.685014ms for pod "kube-controller-manager-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:15.683036  535488 pod_ready.go:83] waiting for pod "kube-proxy-x5kx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.082835  535488 pod_ready.go:94] pod "kube-proxy-x5kx4" is "Ready"
	I1101 10:51:16.082874  535488 pod_ready.go:86] duration metric: took 399.80523ms for pod "kube-proxy-x5kx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.281472  535488 pod_ready.go:83] waiting for pod "kube-scheduler-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.681143  535488 pod_ready.go:94] pod "kube-scheduler-addons-780397" is "Ready"
	I1101 10:51:16.681171  535488 pod_ready.go:86] duration metric: took 399.673207ms for pod "kube-scheduler-addons-780397" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:51:16.681186  535488 pod_ready.go:40] duration metric: took 1.604589473s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:51:16.734645  535488 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:51:16.738749  535488 out.go:179] * Done! kubectl is now configured to use "addons-780397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:51:46 addons-780397 crio[830]: time="2025-11-01T10:51:46.350545818Z" level=info msg="Starting container: 852069938efd619825d71d06723ba8f61532192a1ae43787a20aaab956e0889c" id=c70b2d41-de61-47f1-9fd8-a046e43d07a2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:51:46 addons-780397 crio[830]: time="2025-11-01T10:51:46.353340336Z" level=info msg="Started container" PID=5451 containerID=852069938efd619825d71d06723ba8f61532192a1ae43787a20aaab956e0889c description=default/test-local-path/busybox id=c70b2d41-de61-47f1-9fd8-a046e43d07a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b950e3444e3bf0c29a7724afeb0643cb656fd032469b86c11c599268d429df7
	Nov 01 10:51:48 addons-780397 crio[830]: time="2025-11-01T10:51:48.084885709Z" level=info msg="Stopping pod sandbox: 5b950e3444e3bf0c29a7724afeb0643cb656fd032469b86c11c599268d429df7" id=eafa6387-8eee-48bd-a3db-6add65f8b4aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:51:48 addons-780397 crio[830]: time="2025-11-01T10:51:48.085220639Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:5b950e3444e3bf0c29a7724afeb0643cb656fd032469b86c11c599268d429df7 UID:8b495772-a76f-41a9-9013-400a8c6dbc43 NetNS:/var/run/netns/13a00067-8088-403f-a1e1-5d64035cddc3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cd48}] Aliases:map[]}"
	Nov 01 10:51:48 addons-780397 crio[830]: time="2025-11-01T10:51:48.08536707Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:51:48 addons-780397 crio[830]: time="2025-11-01T10:51:48.111598198Z" level=info msg="Stopped pod sandbox: 5b950e3444e3bf0c29a7724afeb0643cb656fd032469b86c11c599268d429df7" id=eafa6387-8eee-48bd-a3db-6add65f8b4aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.324939756Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45/POD" id=b6bdeb7d-52ed-486c-82a6-957e264c004e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.325060964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.338291012Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45 Namespace:local-path-storage ID:c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8 UID:1f4a5526-fd0c-469a-96d8-b64e23fd03f2 NetNS:/var/run/netns/0a59c064-40fa-4eb6-967e-6f825cc03af9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400143ca78}] Aliases:map[]}"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.338364851Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.363002799Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45 Namespace:local-path-storage ID:c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8 UID:1f4a5526-fd0c-469a-96d8-b64e23fd03f2 NetNS:/var/run/netns/0a59c064-40fa-4eb6-967e-6f825cc03af9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400143ca78}] Aliases:map[]}"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.363168053Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45 for CNI network kindnet (type=ptp)"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.367987461Z" level=info msg="Ran pod sandbox c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8 with infra container: local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45/POD" id=b6bdeb7d-52ed-486c-82a6-957e264c004e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.36917147Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=1e536670-ce4f-46cc-9d2c-e2e219e997e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.375225161Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=93108444-e8a5-447e-acee-772b9812e5e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.384415348Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45/helper-pod" id=b060767a-87f2-4086-9c64-0ae97a8e27a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.384543317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.391265604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.391910807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.41748897Z" level=info msg="Created container 58a4ca27369d5ac16b34925e5af8545dc45676a1d9a1669ed0e9c162d67b3d47: local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45/helper-pod" id=b060767a-87f2-4086-9c64-0ae97a8e27a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.418766084Z" level=info msg="Starting container: 58a4ca27369d5ac16b34925e5af8545dc45676a1d9a1669ed0e9c162d67b3d47" id=c38da849-4309-4c56-a2e3-6126605cc4b6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:51:49 addons-780397 crio[830]: time="2025-11-01T10:51:49.421522086Z" level=info msg="Started container" PID=5527 containerID=58a4ca27369d5ac16b34925e5af8545dc45676a1d9a1669ed0e9c162d67b3d47 description=local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45/helper-pod id=c38da849-4309-4c56-a2e3-6126605cc4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8
	Nov 01 10:51:51 addons-780397 crio[830]: time="2025-11-01T10:51:51.105558124Z" level=info msg="Stopping pod sandbox: c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8" id=c02f5b60-1266-400d-89be-9b62125879c8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:51:51 addons-780397 crio[830]: time="2025-11-01T10:51:51.105881345Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45 Namespace:local-path-storage ID:c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8 UID:1f4a5526-fd0c-469a-96d8-b64e23fd03f2 NetNS:/var/run/netns/0a59c064-40fa-4eb6-967e-6f825cc03af9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d210}] Aliases:map[]}"
	Nov 01 10:51:51 addons-780397 crio[830]: time="2025-11-01T10:51:51.106028054Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45 from CNI network \"kindnet\" (type=ptp)"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	58a4ca27369d5       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   c7ef83ea3c1e1       helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45   local-path-storage
	852069938efd6       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   5b950e3444e3b       test-local-path                                              default
	da27ca2381b01       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   dd5adfe0f633a       helper-pod-create-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45   local-path-storage
	3a4e7e7d601bf       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   ec2e5e385d650       registry-test                                                default
	d1d217a3ef36f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   44395057aec6d       busybox                                                      default
	95c401b65b6d0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          36 seconds ago       Running             csi-snapshotter                          0                   0bb25b4390104       csi-hostpathplugin-rcv72                                     kube-system
	9755d6ed77411       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          37 seconds ago       Running             csi-provisioner                          0                   0bb25b4390104       csi-hostpathplugin-rcv72                                     kube-system
	24eb361f78f37       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            39 seconds ago       Running             liveness-probe                           0                   0bb25b4390104       csi-hostpathplugin-rcv72                                     kube-system
	aa5242c774ec5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           40 seconds ago       Running             hostpath                                 0                   0bb25b4390104       csi-hostpathplugin-rcv72                                     kube-system
	eb322b2b4d349       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             42 seconds ago       Running             controller                               0                   4a650f91b3880       ingress-nginx-controller-675c5ddd98-hs7kh                    ingress-nginx
	edf5a75a78d04       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 48 seconds ago       Running             gcp-auth                                 0                   26008fb2535d3       gcp-auth-78565c9fb4-cbqfl                                    gcp-auth
	53c4829891c5d       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             48 seconds ago       Exited              patch                                    3                   aefcaeaf5955f       gcp-auth-certs-patch-n6zkn                                   gcp-auth
	c5690aa550023       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                51 seconds ago       Running             node-driver-registrar                    0                   0bb25b4390104       csi-hostpathplugin-rcv72                                     kube-system
	d0f4a3d46de3d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            52 seconds ago       Running             gadget                                   0                   eb3fe8d85fcec       gadget-9w9vd                                                 gadget
	06297cda80172       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              56 seconds ago       Running             registry-proxy                           0                   dc76f27639aee       registry-proxy-w5qfc                                         kube-system
	109ca94f2ac60       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              59 seconds ago       Running             csi-resizer                              0                   83826fab6dc70       csi-hostpath-resizer-0                                       kube-system
	8c5122f8790f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   620dcfa26f384       snapshot-controller-7d9fbc56b8-4wv2x                         kube-system
	ae7007dc0baff       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   ab76fd43d3784       yakd-dashboard-5ff678cb9-pkd7z                               yakd-dashboard
	9226b4f612a88       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c2b1db8174646       snapshot-controller-7d9fbc56b8-k9qvr                         kube-system
	e9f3d5cb96605       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   b33c3031aa6a8       ingress-nginx-admission-patch-gck89                          ingress-nginx
	37f3bb87ae1e0       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   408490db506c9       csi-hostpath-attacher-0                                      kube-system
	725ca44578089       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   0bb25b4390104       csi-hostpathplugin-rcv72                                     kube-system
	2f6544622dcfd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   0b18e5aec929b       ingress-nginx-admission-create-gmhvg                         ingress-nginx
	f570fa47b541d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   e48d1c67d745d       local-path-provisioner-648f6765c9-5rtmm                      local-path-storage
	de45b5e729e5c       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   23f26a2d1bfd3       nvidia-device-plugin-daemonset-wx5mc                         kube-system
	c7a8e262c1c24       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   e6c54ff21b1c3       cloud-spanner-emulator-86bd5cbb97-g4v8z                      default
	20dc20a6da2fd       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   fca14ba697e2a       kube-ingress-dns-minikube                                    kube-system
	ed4831c43c9c3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   0733afa0c1b95       registry-6b586f9694-px94l                                    kube-system
	eae7ef5c0407f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   8d0fda4883ae5       metrics-server-85b7d694d7-lzfmm                              kube-system
	c0ebe38f484ad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   f5019695ad30e       storage-provisioner                                          kube-system
	63f495cb67067       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   040b9af5ad20c       coredns-66bc5c9577-k9m58                                     kube-system
	9219d1677a776       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   c27637e698530       kube-proxy-x5kx4                                             kube-system
	d1fceb6cb01a8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   3f3e55a8194a9       kindnet-lvd2k                                                kube-system
	45b9a03f6e493       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   5499795fbadc2       kube-scheduler-addons-780397                                 kube-system
	47b214409da44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   76116c436e52a       kube-controller-manager-addons-780397                        kube-system
	1d05f7b649fbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   1cdae0f5b4964       kube-apiserver-addons-780397                                 kube-system
	ee87b767b30b5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   b10fdb8c31f45       etcd-addons-780397                                           kube-system
	
	
	==> coredns [63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07] <==
	[INFO] 10.244.0.18:53473 - 18863 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001812063s
	[INFO] 10.244.0.18:53473 - 41547 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000245533s
	[INFO] 10.244.0.18:53473 - 63189 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00037576s
	[INFO] 10.244.0.18:57465 - 54963 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162103s
	[INFO] 10.244.0.18:57465 - 54750 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00023801s
	[INFO] 10.244.0.18:33156 - 12142 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106742s
	[INFO] 10.244.0.18:33156 - 11945 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091693s
	[INFO] 10.244.0.18:56171 - 60054 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083021s
	[INFO] 10.244.0.18:56171 - 59841 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000207626s
	[INFO] 10.244.0.18:57222 - 26390 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005768346s
	[INFO] 10.244.0.18:57222 - 26851 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005864233s
	[INFO] 10.244.0.18:43306 - 29307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160322s
	[INFO] 10.244.0.18:43306 - 29462 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000149827s
	[INFO] 10.244.0.21:57470 - 34275 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000262133s
	[INFO] 10.244.0.21:58428 - 54324 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195195s
	[INFO] 10.244.0.21:56292 - 55666 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168823s
	[INFO] 10.244.0.21:34167 - 4598 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000254215s
	[INFO] 10.244.0.21:59520 - 27548 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149565s
	[INFO] 10.244.0.21:38761 - 12195 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096124s
	[INFO] 10.244.0.21:58465 - 360 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002372719s
	[INFO] 10.244.0.21:45175 - 10089 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002543996s
	[INFO] 10.244.0.21:37461 - 3910 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001625993s
	[INFO] 10.244.0.21:49447 - 56419 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001789277s
	[INFO] 10.244.0.23:41165 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001066608s
	[INFO] 10.244.0.23:39984 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184077s
	
	
	==> describe nodes <==
	Name:               addons-780397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-780397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-780397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_49_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-780397
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-780397"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:49:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-780397
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:51:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:51:38 +0000   Sat, 01 Nov 2025 10:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:51:38 +0000   Sat, 01 Nov 2025 10:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:51:38 +0000   Sat, 01 Nov 2025 10:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:51:38 +0000   Sat, 01 Nov 2025 10:50:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-780397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                36b93fca-ca40-4c07-9468-4e940368c507
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-86bd5cbb97-g4v8z      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  gadget                      gadget-9w9vd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  gcp-auth                    gcp-auth-78565c9fb4-cbqfl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-hs7kh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m14s
	  kube-system                 coredns-66bc5c9577-k9m58                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 csi-hostpathplugin-rcv72                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 etcd-addons-780397                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-lvd2k                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-addons-780397                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-addons-780397        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-x5kx4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-addons-780397                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 metrics-server-85b7d694d7-lzfmm              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m16s
	  kube-system                 nvidia-device-plugin-daemonset-wx5mc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 registry-6b586f9694-px94l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 registry-creds-764b6fb674-dlcvd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 registry-proxy-w5qfc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 snapshot-controller-7d9fbc56b8-4wv2x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-k9qvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  local-path-storage          local-path-provisioner-648f6765c9-5rtmm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pkd7z               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node addons-780397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node addons-780397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node addons-780397 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node addons-780397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node addons-780397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node addons-780397 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m22s                  node-controller  Node addons-780397 event: Registered Node addons-780397 in Controller
	  Normal   NodeReady                98s                    kubelet          Node addons-780397 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 10:27] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:40] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[ +12.370416] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d] <==
	{"level":"warn","ts":"2025-11-01T10:49:21.608037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.618660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.641883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.656394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.686299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.696093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.709893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.722503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.746054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.761180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.778936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.791819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.825931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.846701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.864758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.896361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.921571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:21.940378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:22.041902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:38.212883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:38.229228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.755258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.770457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.801756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:49:59.821881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41794","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [edf5a75a78d04e9d3c1cd09c9d4a5accd533078f752949605a4cba64e6501d81] <==
	2025/11/01 10:51:02 GCP Auth Webhook started!
	2025/11/01 10:51:17 Ready to marshal response ...
	2025/11/01 10:51:17 Ready to write response ...
	2025/11/01 10:51:17 Ready to marshal response ...
	2025/11/01 10:51:17 Ready to write response ...
	2025/11/01 10:51:17 Ready to marshal response ...
	2025/11/01 10:51:17 Ready to write response ...
	2025/11/01 10:51:39 Ready to marshal response ...
	2025/11/01 10:51:39 Ready to write response ...
	2025/11/01 10:51:40 Ready to marshal response ...
	2025/11/01 10:51:40 Ready to write response ...
	2025/11/01 10:51:40 Ready to marshal response ...
	2025/11/01 10:51:40 Ready to write response ...
	2025/11/01 10:51:49 Ready to marshal response ...
	2025/11/01 10:51:49 Ready to write response ...
	
	
	==> kernel <==
	 10:51:51 up  2:34,  0 user,  load average: 1.89, 2.84, 3.25
	Linux addons-780397 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf] <==
	E1101 10:50:03.257554       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:50:03.257576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 10:50:04.257807       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:50:04.257864       1 metrics.go:72] Registering metrics
	I1101 10:50:04.257915       1 controller.go:711] "Syncing nftables rules"
	I1101 10:50:13.263976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:50:13.264035       1 main.go:301] handling current node
	I1101 10:50:23.256911       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:50:23.256968       1 main.go:301] handling current node
	I1101 10:50:33.257894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:50:33.257942       1 main.go:301] handling current node
	I1101 10:50:43.256719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:50:43.256760       1 main.go:301] handling current node
	I1101 10:50:53.256801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:50:53.256838       1 main.go:301] handling current node
	I1101 10:51:03.256212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:03.256242       1 main.go:301] handling current node
	I1101 10:51:13.256953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:13.257001       1 main.go:301] handling current node
	I1101 10:51:23.255908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:23.255975       1 main.go:301] handling current node
	I1101 10:51:33.264969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:33.265070       1 main.go:301] handling current node
	I1101 10:51:43.256100       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:43.256208       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93] <==
	W1101 10:49:38.202857       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:49:38.223902       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1101 10:49:39.857214       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.145.123"}
	W1101 10:49:59.755058       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 10:49:59.769231       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 10:49:59.801656       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 10:49:59.818104       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 10:50:13.822786       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.145.123:443: connect: connection refused
	E1101 10:50:13.822834       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.145.123:443: connect: connection refused" logger="UnhandledError"
	W1101 10:50:13.823275       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.145.123:443: connect: connection refused
	E1101 10:50:13.823311       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.145.123:443: connect: connection refused" logger="UnhandledError"
	W1101 10:50:13.948222       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.145.123:443: connect: connection refused
	E1101 10:50:13.948349       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.145.123:443: connect: connection refused" logger="UnhandledError"
	W1101 10:50:19.586414       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 10:50:19.586542       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 10:50:19.587845       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.193.12:443: connect: connection refused" logger="UnhandledError"
	E1101 10:50:19.592942       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.193.12:443: connect: connection refused" logger="UnhandledError"
	E1101 10:50:19.595704       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.193.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.193.12:443: connect: connection refused" logger="UnhandledError"
	I1101 10:50:19.726054       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 10:51:27.660359       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45504: use of closed network connection
	E1101 10:51:27.894976       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45520: use of closed network connection
	E1101 10:51:28.027991       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45536: use of closed network connection
	
	
	==> kube-controller-manager [47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0] <==
	I1101 10:49:29.776092       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:49:29.777220       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:49:29.777265       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:49:29.784701       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:49:29.784802       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:49:29.784819       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:49:29.785351       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:49:29.785818       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-780397"
	I1101 10:49:29.785913       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:49:29.785286       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:49:29.786209       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:49:29.787352       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:49:29.788474       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:49:29.788646       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:49:29.789963       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:49:29.792934       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:49:29.800228       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1101 10:49:59.747121       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 10:49:59.747295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 10:49:59.747338       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 10:49:59.790641       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 10:49:59.794979       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 10:49:59.847591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:49:59.895567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:50:14.832245       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495] <==
	I1101 10:49:33.099962       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:49:33.200607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:49:33.301242       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:49:33.302255       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:49:33.302325       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:49:33.390387       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:49:33.390458       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:49:33.401869       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:49:33.408444       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:49:33.408478       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:49:33.415603       1 config.go:200] "Starting service config controller"
	I1101 10:49:33.415623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:49:33.415644       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:49:33.415649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:49:33.415666       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:49:33.415670       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:49:33.416330       1 config.go:309] "Starting node config controller"
	I1101 10:49:33.416353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:49:33.416359       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:49:33.515757       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:49:33.515800       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:49:33.515833       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6] <==
	I1101 10:49:23.271434       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:49:23.273768       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:49:23.273801       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:49:23.274254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:49:23.274372       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1101 10:49:23.282895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:49:23.292078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:49:23.292282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:49:23.292682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:49:23.292701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:49:23.292832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:49:23.293024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:49:23.293027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:49:23.293072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:49:23.293153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:49:23.293229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:49:23.293342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:49:23.293451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:49:23.293489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:49:23.293503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:49:23.293558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:49:23.293596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:49:23.293682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:49:23.293753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1101 10:49:24.974034       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:51:48 addons-780397 kubelet[1289]: I1101 10:51:48.238404    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b495772-a76f-41a9-9013-400a8c6dbc43-kube-api-access-4lm66" (OuterVolumeSpecName: "kube-api-access-4lm66") pod "8b495772-a76f-41a9-9013-400a8c6dbc43" (UID: "8b495772-a76f-41a9-9013-400a8c6dbc43"). InnerVolumeSpecName "kube-api-access-4lm66". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 10:51:48 addons-780397 kubelet[1289]: I1101 10:51:48.332479    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b495772-a76f-41a9-9013-400a8c6dbc43-gcp-creds\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:51:48 addons-780397 kubelet[1289]: I1101 10:51:48.332674    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4lm66\" (UniqueName: \"kubernetes.io/projected/8b495772-a76f-41a9-9013-400a8c6dbc43-kube-api-access-4lm66\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:51:48 addons-780397 kubelet[1289]: I1101 10:51:48.332750    1289 reconciler_common.go:299] "Volume detached for volume \"pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45\" (UniqueName: \"kubernetes.io/host-path/8b495772-a76f-41a9-9013-400a8c6dbc43-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:51:49 addons-780397 kubelet[1289]: I1101 10:51:49.090105    1289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b950e3444e3bf0c29a7724afeb0643cb656fd032469b86c11c599268d429df7"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: E1101 10:51:49.092146    1289 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-780397\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-780397' and this object" podUID="8b495772-a76f-41a9-9013-400a8c6dbc43" pod="default/test-local-path"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: I1101 10:51:49.140193    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-data\") pod \"helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") " pod="local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: I1101 10:51:49.140460    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-script\") pod \"helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") " pod="local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: I1101 10:51:49.140602    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjsb\" (UniqueName: \"kubernetes.io/projected/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-kube-api-access-lwjsb\") pod \"helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") " pod="local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: I1101 10:51:49.140765    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-gcp-creds\") pod \"helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") " pod="local-path-storage/helper-pod-delete-pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: I1101 10:51:49.214416    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b495772-a76f-41a9-9013-400a8c6dbc43" path="/var/lib/kubelet/pods/8b495772-a76f-41a9-9013-400a8c6dbc43/volumes"
	Nov 01 10:51:49 addons-780397 kubelet[1289]: W1101 10:51:49.365906    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7d2662ca9bdd04e73b2b644238c59e8c0ec7385c2e197de4cb030920e581a3c6/crio-c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8 WatchSource:0}: Error finding container c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8: Status 404 returned error can't find the container with id c7ef83ea3c1e18fc461b23f9d08ff6380b69dc1aadf29d6d4b32a2b58d792fe8
	Nov 01 10:51:50 addons-780397 kubelet[1289]: I1101 10:51:50.211159    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wx5mc" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.270548    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-script\") pod \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") "
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.270610    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-gcp-creds\") pod \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") "
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.270637    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-data\") pod \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") "
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.270680    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwjsb\" (UniqueName: \"kubernetes.io/projected/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-kube-api-access-lwjsb\") pod \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\" (UID: \"1f4a5526-fd0c-469a-96d8-b64e23fd03f2\") "
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.271131    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1f4a5526-fd0c-469a-96d8-b64e23fd03f2" (UID: "1f4a5526-fd0c-469a-96d8-b64e23fd03f2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.271416    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-script" (OuterVolumeSpecName: "script") pod "1f4a5526-fd0c-469a-96d8-b64e23fd03f2" (UID: "1f4a5526-fd0c-469a-96d8-b64e23fd03f2"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.271447    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-data" (OuterVolumeSpecName: "data") pod "1f4a5526-fd0c-469a-96d8-b64e23fd03f2" (UID: "1f4a5526-fd0c-469a-96d8-b64e23fd03f2"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.276945    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-kube-api-access-lwjsb" (OuterVolumeSpecName: "kube-api-access-lwjsb") pod "1f4a5526-fd0c-469a-96d8-b64e23fd03f2" (UID: "1f4a5526-fd0c-469a-96d8-b64e23fd03f2"). InnerVolumeSpecName "kube-api-access-lwjsb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.371815    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-gcp-creds\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.371859    1289 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-data\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.371871    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwjsb\" (UniqueName: \"kubernetes.io/projected/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-kube-api-access-lwjsb\") on node \"addons-780397\" DevicePath \"\""
	Nov 01 10:51:51 addons-780397 kubelet[1289]: I1101 10:51:51.371881    1289 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/1f4a5526-fd0c-469a-96d8-b64e23fd03f2-script\") on node \"addons-780397\" DevicePath \"\""
	
	
	==> storage-provisioner [c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0] <==
	W1101 10:51:27.732523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:29.736297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:29.741218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:31.744383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:31.748805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:33.752404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:33.759204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:35.762868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:35.766978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:37.769894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:37.774259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:39.777306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:39.781813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:41.784659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:41.789188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:43.792225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:43.796752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:45.800809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:45.806429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:47.810296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:47.817325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:49.821010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:49.837455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:51.840879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:51:51.847037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-780397 -n addons-780397
helpers_test.go:269: (dbg) Run:  kubectl --context addons-780397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89 registry-creds-764b6fb674-dlcvd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-780397 describe pod ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89 registry-creds-764b6fb674-dlcvd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-780397 describe pod ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89 registry-creds-764b6fb674-dlcvd: exit status 1 (84.695563ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gmhvg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gck89" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dlcvd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-780397 describe pod ingress-nginx-admission-create-gmhvg ingress-nginx-admission-patch-gck89 registry-creds-764b6fb674-dlcvd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable headlamp --alsologtostderr -v=1: exit status 11 (268.392706ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:52.759586  542892 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:52.760376  542892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:52.760394  542892 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:52.760399  542892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:52.760709  542892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:52.761100  542892 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:52.761514  542892 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:52.761535  542892 addons.go:607] checking whether the cluster is paused
	I1101 10:51:52.761680  542892 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:52.761745  542892 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:52.762228  542892 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:52.781061  542892 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:52.781121  542892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:52.797509  542892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:52.900454  542892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:52.900537  542892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:52.940056  542892 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:52.940138  542892 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:52.940161  542892 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:52.940182  542892 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:52.940217  542892 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:52.940253  542892 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:52.940274  542892 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:52.940294  542892 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:52.940333  542892 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:52.940371  542892 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:52.940391  542892 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:52.940409  542892 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:52.940441  542892 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:52.940467  542892 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:52.940489  542892 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:52.940518  542892 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:52.940560  542892 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:52.940597  542892 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:52.940617  542892 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:52.940637  542892 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:52.940671  542892 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:52.940696  542892 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:52.940716  542892 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:52.940737  542892 cri.go:89] found id: ""
	I1101 10:51:52.940844  542892 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:52.957261  542892 out.go:203] 
	W1101 10:51:52.960235  542892 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:52.960279  542892 out.go:285] * 
	* 
	W1101 10:51:52.967390  542892 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:52.970349  542892 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.63s)
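Failure pattern for this test: the step that fails is the trailing `addons disable headlamp` call, which exits 11 with MK_ADDON_DISABLE_PAUSED. Per the stderr above, minikube first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this crio-based node /run/runc does not exist, so the check itself errors out before any addon work happens. A hedged way to re-run the two halves of that check by hand, reusing the profile name from this report (exact output depends on the crio/runc setup on the node):

	# The paused check minikube runs over SSH; this is the call that fails in the stderr above.
	out/minikube-linux-arm64 -p addons-780397 ssh -- sudo runc list -f json

	# The CRI-level listing of kube-system containers, which does succeed in the same stderr.
	out/minikube-linux-arm64 -p addons-780397 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

The same exit-11 pattern repeats for the CloudSpanner, LocalPath and NvidiaDevicePlugin failures below.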

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-g4v8z" [7388b0e8-3ed8-4486-8132-a6c3f7ce1fed] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005820821s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (329.165689ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:49.763320  542388 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:49.764475  542388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:49.764549  542388 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:49.764570  542388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:49.765549  542388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:49.766096  542388 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:49.766603  542388 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:49.766622  542388 addons.go:607] checking whether the cluster is paused
	I1101 10:51:49.766739  542388 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:49.766750  542388 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:49.767233  542388 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:49.785331  542388 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:49.785389  542388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:49.827310  542388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:49.940546  542388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:49.940708  542388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:49.978379  542388 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:49.978398  542388 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:49.978404  542388 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:49.978407  542388 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:49.978411  542388 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:49.978415  542388 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:49.978418  542388 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:49.978422  542388 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:49.978425  542388 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:49.978434  542388 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:49.978438  542388 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:49.978441  542388 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:49.978444  542388 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:49.978447  542388 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:49.978450  542388 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:49.978458  542388 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:49.978462  542388 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:49.978467  542388 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:49.978470  542388 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:49.978473  542388 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:49.978477  542388 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:49.978480  542388 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:49.978489  542388 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:49.978493  542388 cri.go:89] found id: ""
	I1101 10:51:49.978542  542388 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:49.999372  542388 out.go:203] 
	W1101 10:51:50.004874  542388 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:50.004902  542388 out.go:285] * 
	* 
	W1101 10:51:50.014694  542388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:50.017807  542388 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.34s)
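Same root cause as the Headlamp failure above: the cloud-spanner-emulator pod became healthy within ~5s, and only the `addons disable cloud-spanner` call failed on the runc-based paused check. If the node needs further inspection, `crictl info` dumps the configured CRI runtime, which is one way to see what crio is actually using instead of a /run/runc state directory (hedged; the exact fields vary by crio version):

	out/minikube-linux-arm64 -p addons-780397 ssh -- sudo crictl info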

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.46s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-780397 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-780397 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/11/01 10:51:44 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8b495772-a76f-41a9-9013-400a8c6dbc43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8b495772-a76f-41a9-9013-400a8c6dbc43] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8b495772-a76f-41a9-9013-400a8c6dbc43] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003803711s
addons_test.go:967: (dbg) Run:  kubectl --context addons-780397 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 ssh "cat /opt/local-path-provisioner/pvc-5807708d-69fc-4d9a-8cb5-d21e2a3cad45_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-780397 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-780397 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (284.221834ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:49.126484  542257 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:49.127354  542257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:49.127368  542257 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:49.127374  542257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:49.127677  542257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:49.127994  542257 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:49.128354  542257 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:49.128389  542257 addons.go:607] checking whether the cluster is paused
	I1101 10:51:49.128497  542257 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:49.128506  542257 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:49.128955  542257 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:49.147491  542257 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:49.147551  542257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:49.167509  542257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:49.272318  542257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:49.272407  542257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:49.305215  542257 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:49.305238  542257 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:49.305243  542257 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:49.305247  542257 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:49.305251  542257 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:49.305255  542257 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:49.305258  542257 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:49.305262  542257 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:49.305265  542257 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:49.305271  542257 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:49.305275  542257 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:49.305279  542257 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:49.305282  542257 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:49.305285  542257 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:49.305288  542257 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:49.305298  542257 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:49.305305  542257 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:49.305310  542257 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:49.305313  542257 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:49.305316  542257 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:49.305321  542257 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:49.305329  542257 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:49.305332  542257 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:49.305336  542257 cri.go:89] found id: ""
	I1101 10:51:49.305387  542257 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:49.327332  542257 out.go:203] 
	W1101 10:51:49.330821  542257 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:49.330898  542257 out.go:285] * 
	* 
	W1101 10:51:49.340268  542257 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:49.343838  542257 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.46s)
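The local-path flow itself completed: the PVC bound, the test-local-path pod wrote file1, and both objects were deleted; the failure is again the trailing `addons disable storage-provisioner-rancher` exiting 11 on the paused check. The provisioner can be re-verified independently of the addon-disable path by replaying the same commands the test ran (a sketch using the manifests and steps recorded above):

	kubectl --context addons-780397 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-780397 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-780397 get pvc test-pvc -o jsonpath='{.status.phase}' -n default
	kubectl --context addons-780397 delete pod test-local-path
	kubectl --context addons-780397 delete pvc test-pvc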

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-wx5mc" [340dc4bf-d4db-4446-8048-4ee8b6fae48e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00510726s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (308.802057ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:39.653749  541808 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:39.654881  541808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:39.654898  541808 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:39.654904  541808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:39.655210  541808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:39.655545  541808 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:39.655969  541808 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:39.655989  541808 addons.go:607] checking whether the cluster is paused
	I1101 10:51:39.656130  541808 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:39.656150  541808 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:39.657635  541808 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:39.675093  541808 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:39.675157  541808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:39.694616  541808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:39.810183  541808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:39.810271  541808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:39.848069  541808 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:39.848088  541808 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:39.848093  541808 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:39.848102  541808 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:39.848106  541808 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:39.848109  541808 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:39.848112  541808 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:39.848116  541808 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:39.848119  541808 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:39.848124  541808 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:39.848127  541808 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:39.848130  541808 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:39.848133  541808 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:39.848136  541808 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:39.848139  541808 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:39.848144  541808 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:39.848147  541808 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:39.848151  541808 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:39.848154  541808 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:39.848157  541808 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:39.848161  541808 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:39.848164  541808 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:39.848167  541808 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:39.848170  541808 cri.go:89] found id: ""
	I1101 10:51:39.848226  541808 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:39.864529  541808 out.go:203] 
	W1101 10:51:39.869901  541808 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:39.869941  541808 out.go:285] * 
	* 
	W1101 10:51:39.877051  541808 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:39.883055  541808 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.32s)

                                                
                                    
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pkd7z" [fa8f973e-fe4d-4f39-99ef-b1fe34ffc053] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003939546s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-780397 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-780397 addons disable yakd --alsologtostderr -v=1: exit status 11 (269.628206ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:51:34.363729  541734 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:51:34.364581  541734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:34.364597  541734 out.go:374] Setting ErrFile to fd 2...
	I1101 10:51:34.364604  541734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:51:34.364882  541734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:51:34.365237  541734 mustload.go:66] Loading cluster: addons-780397
	I1101 10:51:34.365681  541734 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:34.365754  541734 addons.go:607] checking whether the cluster is paused
	I1101 10:51:34.365887  541734 config.go:182] Loaded profile config "addons-780397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:51:34.365905  541734 host.go:66] Checking if "addons-780397" exists ...
	I1101 10:51:34.366402  541734 cli_runner.go:164] Run: docker container inspect addons-780397 --format={{.State.Status}}
	I1101 10:51:34.384479  541734 ssh_runner.go:195] Run: systemctl --version
	I1101 10:51:34.384593  541734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-780397
	I1101 10:51:34.404107  541734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/addons-780397/id_rsa Username:docker}
	I1101 10:51:34.508105  541734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:51:34.508203  541734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:51:34.542795  541734 cri.go:89] found id: "95c401b65b6d0530202b415a657d91d26237ed08819ce2d69b65f5dd91182123"
	I1101 10:51:34.542827  541734 cri.go:89] found id: "9755d6ed774119c804b3eebb5b02aeece7b28897f6ff340b604884c75233f0e2"
	I1101 10:51:34.542833  541734 cri.go:89] found id: "24eb361f78f37246f9a44ad4cdb9b6d8ccdddffa6c036fd96a1602b2de47bfaa"
	I1101 10:51:34.542837  541734 cri.go:89] found id: "aa5242c774ec5436a7822920829bbd2ea980f64315bdc851cb5889baadc76840"
	I1101 10:51:34.542841  541734 cri.go:89] found id: "c5690aa550023b620c35c01edf2ddf7a01ceb7cd7780a3736b553c50b8fcfe48"
	I1101 10:51:34.542845  541734 cri.go:89] found id: "06297cda801728c4002a6cd372e4924b7516680933a0c99c519861d01bb88f52"
	I1101 10:51:34.542849  541734 cri.go:89] found id: "109ca94f2ac6029f9b123b5effd51bb3237ebe2ecad81ae1641e01a51e98ea4c"
	I1101 10:51:34.542853  541734 cri.go:89] found id: "8c5122f8790f08cf6b55fa037b76047238f3fb365a13158fa17a7554d7262fd8"
	I1101 10:51:34.542857  541734 cri.go:89] found id: "9226b4f612a88ad6c50508197926e9500a9c65ab67b3451068fb6d7f66f989bb"
	I1101 10:51:34.542863  541734 cri.go:89] found id: "37f3bb87ae1e00d4fee1add1b4841a53cd5f278d444dada5972c69fc513f4bd8"
	I1101 10:51:34.542867  541734 cri.go:89] found id: "725ca4457808990797c591167f1fa12d97cec642ae519d736a9040ba00d478bf"
	I1101 10:51:34.542871  541734 cri.go:89] found id: "de45b5e729e5ca028a98e33f23a9c4a13713de17423bae4088e35ef98da9f8c1"
	I1101 10:51:34.542874  541734 cri.go:89] found id: "20dc20a6da2fd486562650c9f23cf744e5f6532e2aaf2deeb6e00c2919339f82"
	I1101 10:51:34.542878  541734 cri.go:89] found id: "ed4831c43c9c32ae67ed66b1d2cbc7e02e743bf599b9443ab592fc96c49afa1f"
	I1101 10:51:34.542887  541734 cri.go:89] found id: "eae7ef5c0407f9b28d1c11bde72c2e6409a58184d080fb0e93a2aa79a8a22aa8"
	I1101 10:51:34.542897  541734 cri.go:89] found id: "c0ebe38f484ade4dd3056c4ff8e82e230c2538c811ca2a2b3412fd044a3ba1f0"
	I1101 10:51:34.542901  541734 cri.go:89] found id: "63f495cb67067eb809ce4d1fbe457005d0fdd3a9add81eb288784592112f9b07"
	I1101 10:51:34.542904  541734 cri.go:89] found id: "9219d1677a7762dc981afb60ef2efd8799a3a8b75b8d7369ab9ab6bb74936495"
	I1101 10:51:34.542908  541734 cri.go:89] found id: "d1fceb6cb01a80ba436a206561a6804a0190e261c7fe670ca99a2361c483acbf"
	I1101 10:51:34.542911  541734 cri.go:89] found id: "45b9a03f6e493ab3f1ea21607e00188fbdc35fef78dc099cc31011c52f5f5db6"
	I1101 10:51:34.542915  541734 cri.go:89] found id: "47b214409da4436362fb8e749ec0f87e7a6870a902511496159299e13103bca0"
	I1101 10:51:34.542918  541734 cri.go:89] found id: "1d05f7b649fbfac878ce793b29b976edf8426cdc24e2bbbcf9a5e1f44dddca93"
	I1101 10:51:34.542922  541734 cri.go:89] found id: "ee87b767b30b5bd965b6975d122c2db74d82564cc37042028b6c8e5fb2f4265d"
	I1101 10:51:34.542925  541734 cri.go:89] found id: ""
	I1101 10:51:34.542985  541734 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:51:34.557419  541734 out.go:203] 
	W1101 10:51:34.560484  541734 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:51:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:51:34.560510  541734 out.go:285] * 
	* 
	W1101 10:51:34.567600  541734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:51:34.570441  541734 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-780397 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-203469 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-203469 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-48kjc" [3d8d8677-d0e6-4148-9679-c0d3776df8fe] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-203469 -n functional-203469
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 11:08:35.764117575 +0000 UTC m=+1222.315178410
functional_test.go:1645: (dbg) Run:  kubectl --context functional-203469 describe po hello-node-connect-7d85dfc575-48kjc -n default
functional_test.go:1645: (dbg) kubectl --context functional-203469 describe po hello-node-connect-7d85dfc575-48kjc -n default:
Name:             hello-node-connect-7d85dfc575-48kjc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-203469/192.168.49.2
Start Time:       Sat, 01 Nov 2025 10:58:35 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcsk7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tcsk7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-48kjc to functional-203469
Normal   Pulling    7m10s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-203469 logs hello-node-connect-7d85dfc575-48kjc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-203469 logs hello-node-connect-7d85dfc575-48kjc -n default: exit status 1 (103.190516ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-48kjc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-203469 logs hello-node-connect-7d85dfc575-48kjc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
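The kubelet events above show why the pod never started: CRI-O on this node enforces short-name mode, so the unqualified reference "kicbase/echo-server" is rejected as ambiguous instead of being resolved against a search registry. A hedged sketch of the obvious workaround, fully qualifying the image when creating the deployment, follows; the docker.io/ prefix is an assumption about where the image is published, while the context and deployment names are copied from the failing command earlier in this test. Relaxing short-name-mode in /etc/containers/registries.conf would be an alternative, but that changes node configuration rather than the test input.

// qualified_image_deploy.go: create the same deployment with a fully
// qualified image reference so CRI-O's short-name enforcement never applies.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumed fully qualified name; substitute whichever registry actually
	// serves the image.
	image := "docker.io/kicbase/echo-server:latest"

	cmd := exec.Command(
		"kubectl", "--context", "functional-203469",
		"create", "deployment", "hello-node-connect",
		"--image="+image,
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("create deployment failed:", err)
	}
}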
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-203469 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-48kjc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-203469/192.168.49.2
Start Time:       Sat, 01 Nov 2025 10:58:35 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcsk7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tcsk7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-48kjc to functional-203469
Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m11s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m11s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-203469 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-203469 logs -l app=hello-node-connect: exit status 1 (88.189226ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-48kjc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-203469 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-203469 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.212.8
IPs:                      10.106.212.8
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31684/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
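One detail worth calling out from the service dump above: the Endpoints field is empty because the only pod selected by app=hello-node-connect never became Ready, so NodePort 31684 has nothing behind it and any connectivity check against it would fail even once the image-pull problem is fixed. A small diagnostic sketch that checks for this condition before attempting a NodePort request is below; it just shells out to kubectl, and the context and service name are taken from the commands already shown in this test.

// nodeport_readiness_probe.go: confirm the Service has ready endpoints
// before hitting the NodePort. An empty result reproduces the blank
// "Endpoints:" line above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command(
		"kubectl", "--context", "functional-203469",
		"get", "endpoints", "hello-node-connect",
		"-o", "jsonpath={.subsets}",
	).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("service has no ready endpoints; NodePort 31684 will not answer")
		return
	}
	fmt.Println("endpoints:", string(out))
}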
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-203469
helpers_test.go:243: (dbg) docker inspect functional-203469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1",
	        "Created": "2025-11-01T10:55:46.090266217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 550462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:55:46.149236596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1/hosts",
	        "LogPath": "/var/lib/docker/containers/53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1/53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1-json.log",
	        "Name": "/functional-203469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-203469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-203469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d7a18923611afa92a5382cac588def5f0ba88755385b77467fa061f192fcf1",
	                "LowerDir": "/var/lib/docker/overlay2/1a384c40f99663000cbcdccaa5f241dbcb9c3cc3244700a24c5e424ef3de5980-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a384c40f99663000cbcdccaa5f241dbcb9c3cc3244700a24c5e424ef3de5980/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a384c40f99663000cbcdccaa5f241dbcb9c3cc3244700a24c5e424ef3de5980/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a384c40f99663000cbcdccaa5f241dbcb9c3cc3244700a24c5e424ef3de5980/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-203469",
	                "Source": "/var/lib/docker/volumes/functional-203469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-203469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-203469",
	                "name.minikube.sigs.k8s.io": "functional-203469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e9db60b2dd8c480ccaafee07559008846b4877224f180eba3fedecf99d6502c",
	            "SandboxKey": "/var/run/docker/netns/8e9db60b2dd8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-203469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:ed:d6:a0:9f:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03b913117b90267cd6ba02a44772c4dfa260c45fce0ec2faeaa108bce6f92383",
	                    "EndpointID": "8b0c1d34702d29c8942dd864dfbfb47763d521fe7ee03ee3b5b3a50dc1f22db9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-203469",
	                        "53d7a1892361"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-203469 -n functional-203469
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 logs -n 25: (1.489359495s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-203469 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:57 UTC │ 01 Nov 25 10:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 01 Nov 25 10:57 UTC │ 01 Nov 25 10:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 01 Nov 25 10:57 UTC │ 01 Nov 25 10:57 UTC │
	│ kubectl │ functional-203469 kubectl -- --context functional-203469 get pods                                                          │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:57 UTC │ 01 Nov 25 10:57 UTC │
	│ start   │ -p functional-203469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:57 UTC │ 01 Nov 25 10:58 UTC │
	│ service │ invalid-svc -p functional-203469                                                                                           │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │                     │
	│ cp      │ functional-203469 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ config  │ functional-203469 config unset cpus                                                                                        │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ config  │ functional-203469 config get cpus                                                                                          │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │                     │
	│ config  │ functional-203469 config set cpus 2                                                                                        │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ config  │ functional-203469 config get cpus                                                                                          │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ config  │ functional-203469 config unset cpus                                                                                        │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ ssh     │ functional-203469 ssh -n functional-203469 sudo cat /home/docker/cp-test.txt                                               │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ config  │ functional-203469 config get cpus                                                                                          │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │                     │
	│ ssh     │ functional-203469 ssh echo hello                                                                                           │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ cp      │ functional-203469 cp functional-203469:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1747816278/001/cp-test.txt │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ ssh     │ functional-203469 ssh cat /etc/hostname                                                                                    │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ ssh     │ functional-203469 ssh -n functional-203469 sudo cat /home/docker/cp-test.txt                                               │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ tunnel  │ functional-203469 tunnel --alsologtostderr                                                                                 │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │                     │
	│ tunnel  │ functional-203469 tunnel --alsologtostderr                                                                                 │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │                     │
	│ cp      │ functional-203469 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ ssh     │ functional-203469 ssh -n functional-203469 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ tunnel  │ functional-203469 tunnel --alsologtostderr                                                                                 │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │                     │
	│ addons  │ functional-203469 addons list                                                                                              │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	│ addons  │ functional-203469 addons list -o json                                                                                      │ functional-203469 │ jenkins │ v1.37.0 │ 01 Nov 25 10:58 UTC │ 01 Nov 25 10:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:57:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:57:37.964832  554673 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:57:37.964938  554673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:57:37.964942  554673 out.go:374] Setting ErrFile to fd 2...
	I1101 10:57:37.964946  554673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:57:37.965342  554673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:57:37.966044  554673 out.go:368] Setting JSON to false
	I1101 10:57:37.966919  554673 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9607,"bootTime":1761985051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:57:37.966976  554673 start.go:143] virtualization:  
	I1101 10:57:37.970433  554673 out.go:179] * [functional-203469] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:57:37.974241  554673 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:57:37.974342  554673 notify.go:221] Checking for updates...
	I1101 10:57:37.980055  554673 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:57:37.982962  554673 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:57:37.985864  554673 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 10:57:37.988687  554673 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:57:37.991528  554673 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:57:37.994913  554673 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:57:37.995058  554673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:57:38.023004  554673 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:57:38.023105  554673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:57:38.090032  554673 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 10:57:38.079637629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:57:38.090127  554673 docker.go:319] overlay module found
	I1101 10:57:38.095011  554673 out.go:179] * Using the docker driver based on existing profile
	I1101 10:57:38.097860  554673 start.go:309] selected driver: docker
	I1101 10:57:38.097868  554673 start.go:930] validating driver "docker" against &{Name:functional-203469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:57:38.097956  554673 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:57:38.098070  554673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:57:38.156377  554673 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 10:57:38.147500421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:57:38.156787  554673 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:57:38.156812  554673 cni.go:84] Creating CNI manager for ""
	I1101 10:57:38.156869  554673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:57:38.156908  554673 start.go:353] cluster config:
	{Name:functional-203469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:57:38.160064  554673 out.go:179] * Starting "functional-203469" primary control-plane node in "functional-203469" cluster
	I1101 10:57:38.162856  554673 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:57:38.165871  554673 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:57:38.168679  554673 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:57:38.168730  554673 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 10:57:38.168739  554673 cache.go:59] Caching tarball of preloaded images
	I1101 10:57:38.168761  554673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:57:38.168843  554673 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 10:57:38.168853  554673 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:57:38.168981  554673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/config.json ...
	I1101 10:57:38.188068  554673 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:57:38.188080  554673 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:57:38.188092  554673 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:57:38.188117  554673 start.go:360] acquireMachinesLock for functional-203469: {Name:mk680c89d979e17fc692de9393d17b947f7a1f3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:57:38.188188  554673 start.go:364] duration metric: took 50.971µs to acquireMachinesLock for "functional-203469"
	I1101 10:57:38.188208  554673 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:57:38.188212  554673 fix.go:54] fixHost starting: 
	I1101 10:57:38.188473  554673 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
	I1101 10:57:38.204759  554673 fix.go:112] recreateIfNeeded on functional-203469: state=Running err=<nil>
	W1101 10:57:38.204777  554673 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:57:38.208033  554673 out.go:252] * Updating the running docker "functional-203469" container ...
	I1101 10:57:38.208058  554673 machine.go:94] provisionDockerMachine start ...
	I1101 10:57:38.208135  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:38.225472  554673 main.go:143] libmachine: Using SSH client type: native
	I1101 10:57:38.225835  554673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1101 10:57:38.225843  554673 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:57:38.373196  554673 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-203469
	
	I1101 10:57:38.373210  554673 ubuntu.go:182] provisioning hostname "functional-203469"
	I1101 10:57:38.373269  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:38.390356  554673 main.go:143] libmachine: Using SSH client type: native
	I1101 10:57:38.390692  554673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1101 10:57:38.390707  554673 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-203469 && echo "functional-203469" | sudo tee /etc/hostname
	I1101 10:57:38.546795  554673 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-203469
	
	I1101 10:57:38.546869  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:38.564588  554673 main.go:143] libmachine: Using SSH client type: native
	I1101 10:57:38.564882  554673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1101 10:57:38.564897  554673 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-203469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-203469/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-203469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:57:38.714342  554673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
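The provisioning steps above are all carried out by running shell commands on the node over SSH (the libmachine "About to run SSH command" lines). Below is a minimal Go sketch of that pattern, reusing the address, user and key path shown in this log; it assumes the golang.org/x/crypto/ssh package and is only an illustration of the idea, not minikube's actual ssh_runner.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port are copied from the log above; adjust for your environment.
	const keyPath = "/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never do this in production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33505", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same first command the provisioner runs: read back the node's hostname.
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("err=%v output=%s\n", err, out)
}

The hostname, /etc/hosts and sysconfig commands that follow in the log all go through this same session pattern.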
	I1101 10:57:38.714356  554673 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 10:57:38.714376  554673 ubuntu.go:190] setting up certificates
	I1101 10:57:38.714393  554673 provision.go:84] configureAuth start
	I1101 10:57:38.714453  554673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-203469
	I1101 10:57:38.732730  554673 provision.go:143] copyHostCerts
	I1101 10:57:38.732797  554673 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 10:57:38.732811  554673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 10:57:38.732886  554673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 10:57:38.732977  554673 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 10:57:38.732981  554673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 10:57:38.733005  554673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 10:57:38.733058  554673 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 10:57:38.733061  554673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 10:57:38.733083  554673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 10:57:38.733127  554673 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.functional-203469 san=[127.0.0.1 192.168.49.2 functional-203469 localhost minikube]
	I1101 10:57:39.295590  554673 provision.go:177] copyRemoteCerts
	I1101 10:57:39.295641  554673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:57:39.295696  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:39.319374  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:57:39.425606  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:57:39.443924  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:57:39.462840  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:57:39.480533  554673 provision.go:87] duration metric: took 766.125151ms to configureAuth
	I1101 10:57:39.480550  554673 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:57:39.480726  554673 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:57:39.480830  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:39.498620  554673 main.go:143] libmachine: Using SSH client type: native
	I1101 10:57:39.498915  554673 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1101 10:57:39.498927  554673 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:57:44.889567  554673 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:57:44.889580  554673 machine.go:97] duration metric: took 6.681515708s to provisionDockerMachine
	I1101 10:57:44.889589  554673 start.go:293] postStartSetup for "functional-203469" (driver="docker")
	I1101 10:57:44.889599  554673 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:57:44.889688  554673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:57:44.889750  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:44.908967  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:57:45.026140  554673 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:57:45.033256  554673 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:57:45.033292  554673 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:57:45.033304  554673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 10:57:45.033992  554673 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 10:57:45.034192  554673 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 10:57:45.034332  554673 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/test/nested/copy/534720/hosts -> hosts in /etc/test/nested/copy/534720
	I1101 10:57:45.034415  554673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/534720
	I1101 10:57:45.051316  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 10:57:45.087877  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/test/nested/copy/534720/hosts --> /etc/test/nested/copy/534720/hosts (40 bytes)
	I1101 10:57:45.140368  554673 start.go:296] duration metric: took 250.76215ms for postStartSetup
	I1101 10:57:45.140456  554673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:57:45.140518  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:45.165553  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:57:45.289921  554673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:57:45.296707  554673 fix.go:56] duration metric: took 7.108485842s for fixHost
	I1101 10:57:45.296723  554673 start.go:83] releasing machines lock for "functional-203469", held for 7.108526894s
	I1101 10:57:45.296821  554673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-203469
	I1101 10:57:45.328306  554673 ssh_runner.go:195] Run: cat /version.json
	I1101 10:57:45.328367  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:45.328812  554673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:57:45.328899  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:57:45.359967  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:57:45.369894  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:57:45.469666  554673 ssh_runner.go:195] Run: systemctl --version
	I1101 10:57:45.560294  554673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:57:45.600018  554673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:57:45.604704  554673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:57:45.604781  554673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:57:45.612932  554673 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:57:45.612947  554673 start.go:496] detecting cgroup driver to use...
	I1101 10:57:45.612979  554673 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 10:57:45.613027  554673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:57:45.629249  554673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:57:45.642475  554673 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:57:45.642537  554673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:57:45.658613  554673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:57:45.671966  554673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:57:45.803831  554673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:57:45.946412  554673 docker.go:234] disabling docker service ...
	I1101 10:57:45.946467  554673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:57:45.961410  554673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:57:45.974625  554673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:57:46.111147  554673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:57:46.260806  554673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:57:46.274134  554673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:57:46.288339  554673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:57:46.288395  554673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.297967  554673 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:57:46.298039  554673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.307616  554673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.316094  554673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.324737  554673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:57:46.333056  554673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.342203  554673 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.350939  554673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:57:46.359660  554673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:57:46.366969  554673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:57:46.374276  554673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:57:46.508241  554673 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:57:53.638386  554673 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.130122915s)
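The sed commands above point CRI-O at the preloaded pause image and switch its cgroup manager to cgroupfs before crio is restarted. A rough sketch of those two substitutions in Go, operating on an in-memory copy of /etc/crio/crio.conf.d/02-crio.conf; the real run edits the file remotely with sed and then restarts the service.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the relevant lines of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
	// After writing the file back, the node still needs:
	//   systemctl daemon-reload && systemctl restart crio
}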
	I1101 10:57:53.638403  554673 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:57:53.638456  554673 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:57:53.642413  554673 start.go:564] Will wait 60s for crictl version
	I1101 10:57:53.642465  554673 ssh_runner.go:195] Run: which crictl
	I1101 10:57:53.646161  554673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:57:53.677599  554673 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:57:53.677720  554673 ssh_runner.go:195] Run: crio --version
	I1101 10:57:53.706228  554673 ssh_runner.go:195] Run: crio --version
	I1101 10:57:53.739955  554673 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:57:53.743002  554673 cli_runner.go:164] Run: docker network inspect functional-203469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:57:53.758741  554673 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 10:57:53.765738  554673 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1101 10:57:53.768537  554673 kubeadm.go:884] updating cluster {Name:functional-203469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:57:53.768666  554673 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:57:53.768744  554673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:57:53.807059  554673 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:57:53.807071  554673 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:57:53.807127  554673 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:57:53.833290  554673 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:57:53.833302  554673 cache_images.go:86] Images are preloaded, skipping loading
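The preload check runs `sudo crictl images --output json` and concludes that all images are already present for the cri-o runtime. A small sketch of how that output could be inspected, assuming crictl is on PATH and the CRI socket is reachable; the top-level "images" key follows the CRI ListImages JSON shape.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the preload check issues on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var parsed struct {
		Images []json.RawMessage `json:"images"`
	}
	if err := json.Unmarshal(out, &parsed); err != nil {
		panic(err)
	}
	fmt.Printf("%d images reported by the runtime\n", len(parsed.Images))
}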
	I1101 10:57:53.833309  554673 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1101 10:57:53.833415  554673 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-203469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:57:53.833500  554673 ssh_runner.go:195] Run: crio config
	I1101 10:57:53.897726  554673 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1101 10:57:53.897747  554673 cni.go:84] Creating CNI manager for ""
	I1101 10:57:53.897756  554673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:57:53.897770  554673 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:57:53.897806  554673 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-203469 NodeName:functional-203469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:57:53.897930  554673 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-203469"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
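The generated kubeadm.yaml above is a multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. As a quick sanity check, a sketch like the following, assuming gopkg.in/yaml.v3 and using only the document headers shown above, decodes each document and reports its kind.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated to the apiVersion/kind headers of the documents generated above;
	// the full file would be read from /var/tmp/minikube/kubeadm.yaml instead.
	const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("parsed document kind:", doc["kind"])
	}
}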
	
	I1101 10:57:53.897999  554673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:57:53.906193  554673 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:57:53.906258  554673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:57:53.914094  554673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:57:53.927646  554673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:57:53.940987  554673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1101 10:57:53.954499  554673 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:57:53.958495  554673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:57:54.117012  554673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:57:54.130110  554673 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469 for IP: 192.168.49.2
	I1101 10:57:54.130121  554673 certs.go:195] generating shared ca certs ...
	I1101 10:57:54.130135  554673 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:57:54.130270  554673 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 10:57:54.130309  554673 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 10:57:54.130315  554673 certs.go:257] generating profile certs ...
	I1101 10:57:54.130397  554673 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.key
	I1101 10:57:54.130442  554673 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/apiserver.key.faf58f27
	I1101 10:57:54.130476  554673 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/proxy-client.key
	I1101 10:57:54.130589  554673 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 10:57:54.130615  554673 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 10:57:54.130622  554673 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 10:57:54.130652  554673 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:57:54.130679  554673 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:57:54.130700  554673 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 10:57:54.130739  554673 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 10:57:54.131374  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:57:54.149657  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:57:54.167282  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:57:54.187533  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:57:54.207354  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:57:54.228403  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:57:54.246678  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:57:54.266016  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:57:54.284467  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 10:57:54.302807  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 10:57:54.320720  554673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:57:54.338588  554673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:57:54.351294  554673 ssh_runner.go:195] Run: openssl version
	I1101 10:57:54.357532  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 10:57:54.366116  554673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 10:57:54.369835  554673 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 10:57:54.369905  554673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 10:57:54.411084  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:57:54.419115  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:57:54.427882  554673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:57:54.431997  554673 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:57:54.432059  554673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:57:54.473392  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:57:54.481469  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 10:57:54.489990  554673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 10:57:54.494095  554673 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 10:57:54.494167  554673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 10:57:54.535369  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
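Each CA certificate above is made trusted by symlinking it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of that step, shelling out to the same `openssl x509 -hash -noout` command the log uses; the paths are taken from the log and the target directory is assumed writable.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, exactly as in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Create the <hash>.0 symlink if it does not already exist.
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", certPath, "as", link)
}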
	I1101 10:57:54.543898  554673 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:57:54.548638  554673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:57:54.591694  554673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:57:54.636176  554673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:57:54.677357  554673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:57:54.718532  554673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:57:54.759602  554673 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
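The `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be expressed with Go's standard library; the certificate path below is one of those listed in the log, and the program is only a sketch of the idea.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Fail if the certificate expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}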
	I1101 10:57:54.800687  554673 kubeadm.go:401] StartCluster: {Name:functional-203469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:57:54.800765  554673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:57:54.800828  554673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:57:54.829783  554673 cri.go:89] found id: "da9f929cc3dd216a6b0c7dc4165fc56ffb614be3c3666386a298584759515ad6"
	I1101 10:57:54.829794  554673 cri.go:89] found id: "6a9aebe2f6c8050694e258b695e7c1f36a1cea8e7e6adb4c36ff89366afc6b57"
	I1101 10:57:54.829797  554673 cri.go:89] found id: "fe94331fe9651715cbdec902c9f4f1fb3ce4e5fb5210844d370829e38fbc8cb2"
	I1101 10:57:54.829799  554673 cri.go:89] found id: "689c08f865bbd7c20f39ae40e5a2893bc3dc244e3ca1e40d57b6807900526572"
	I1101 10:57:54.829802  554673 cri.go:89] found id: "ae5aaabf53937789e7d118af1b9812a5524fae14f743ef161d697366ebe6418b"
	I1101 10:57:54.829805  554673 cri.go:89] found id: "7ec6eadbb29b4ae550135b39815a25145d8d3c9bdc9956de93037c736cfa39cb"
	I1101 10:57:54.829807  554673 cri.go:89] found id: "0d17dfd8f2bb4617bbfb1e67b4ba059b66b0df17fec72f88e47616538dc94694"
	I1101 10:57:54.829809  554673 cri.go:89] found id: "312a1ed3f389264d6a723ae78d9e1f31f7191b3d09799dced3ad23a053992150"
	I1101 10:57:54.829811  554673 cri.go:89] found id: "8eca32f460da47eda7e78f843367a357f6fff2461895d4fab879bb6a0265ff2f"
	I1101 10:57:54.829818  554673 cri.go:89] found id: "bc3f6730bbf7ad296c75ec818f9ac658e6544de4644bf8a81865cc987023f1c0"
	I1101 10:57:54.829820  554673 cri.go:89] found id: "222b57fd8fd294b906963a612c314577b7772b480f845ab47c8d621e9a936495"
	I1101 10:57:54.829831  554673 cri.go:89] found id: "73c9a18bd8c0a6549d6be09e41dd0b6b24c3c08e36a499a342f6a4898e96406a"
	I1101 10:57:54.829834  554673 cri.go:89] found id: "495d1ca3d0e46c5a4296e3ca8b7e87fc05a2426b0dccc53cecab89b8a68ea4b4"
	I1101 10:57:54.829836  554673 cri.go:89] found id: "8c2e762df71af93b4581468d80570fc178e871cc390e3427ffbeaebdd290e4fa"
	I1101 10:57:54.829838  554673 cri.go:89] found id: "52a8334e9bf66ff09e2882f30e06759ddec21121c9584da0dabce1349a0d4442"
	I1101 10:57:54.829843  554673 cri.go:89] found id: "5acd41732ec1fcdba6faabd0a21eebbf36f9bbe86f0189b954824666d6c9fc44"
	I1101 10:57:54.829845  554673 cri.go:89] found id: ""
	I1101 10:57:54.829897  554673 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:57:54.840626  554673 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:57:54Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:57:54.840714  554673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:57:54.848903  554673 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:57:54.848917  554673 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:57:54.848970  554673 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:57:54.856714  554673 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:57:54.857236  554673 kubeconfig.go:125] found "functional-203469" server: "https://192.168.49.2:8441"
	I1101 10:57:54.858638  554673 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:57:54.866617  554673 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-01 10:55:55.806594561 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-01 10:57:53.948225264 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1101 10:57:54.866626  554673 kubeadm.go:1161] stopping kube-system containers ...
	I1101 10:57:54.866637  554673 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 10:57:54.866690  554673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:57:54.894731  554673 cri.go:89] found id: "da9f929cc3dd216a6b0c7dc4165fc56ffb614be3c3666386a298584759515ad6"
	I1101 10:57:54.894741  554673 cri.go:89] found id: "6a9aebe2f6c8050694e258b695e7c1f36a1cea8e7e6adb4c36ff89366afc6b57"
	I1101 10:57:54.894744  554673 cri.go:89] found id: "fe94331fe9651715cbdec902c9f4f1fb3ce4e5fb5210844d370829e38fbc8cb2"
	I1101 10:57:54.894747  554673 cri.go:89] found id: "689c08f865bbd7c20f39ae40e5a2893bc3dc244e3ca1e40d57b6807900526572"
	I1101 10:57:54.894750  554673 cri.go:89] found id: "ae5aaabf53937789e7d118af1b9812a5524fae14f743ef161d697366ebe6418b"
	I1101 10:57:54.894754  554673 cri.go:89] found id: "7ec6eadbb29b4ae550135b39815a25145d8d3c9bdc9956de93037c736cfa39cb"
	I1101 10:57:54.894757  554673 cri.go:89] found id: "0d17dfd8f2bb4617bbfb1e67b4ba059b66b0df17fec72f88e47616538dc94694"
	I1101 10:57:54.894759  554673 cri.go:89] found id: "312a1ed3f389264d6a723ae78d9e1f31f7191b3d09799dced3ad23a053992150"
	I1101 10:57:54.894761  554673 cri.go:89] found id: "8eca32f460da47eda7e78f843367a357f6fff2461895d4fab879bb6a0265ff2f"
	I1101 10:57:54.894768  554673 cri.go:89] found id: "bc3f6730bbf7ad296c75ec818f9ac658e6544de4644bf8a81865cc987023f1c0"
	I1101 10:57:54.894781  554673 cri.go:89] found id: "222b57fd8fd294b906963a612c314577b7772b480f845ab47c8d621e9a936495"
	I1101 10:57:54.894783  554673 cri.go:89] found id: "73c9a18bd8c0a6549d6be09e41dd0b6b24c3c08e36a499a342f6a4898e96406a"
	I1101 10:57:54.894785  554673 cri.go:89] found id: "495d1ca3d0e46c5a4296e3ca8b7e87fc05a2426b0dccc53cecab89b8a68ea4b4"
	I1101 10:57:54.894787  554673 cri.go:89] found id: "8c2e762df71af93b4581468d80570fc178e871cc390e3427ffbeaebdd290e4fa"
	I1101 10:57:54.894789  554673 cri.go:89] found id: "52a8334e9bf66ff09e2882f30e06759ddec21121c9584da0dabce1349a0d4442"
	I1101 10:57:54.894794  554673 cri.go:89] found id: "5acd41732ec1fcdba6faabd0a21eebbf36f9bbe86f0189b954824666d6c9fc44"
	I1101 10:57:54.894796  554673 cri.go:89] found id: ""
	I1101 10:57:54.894800  554673 cri.go:252] Stopping containers: [da9f929cc3dd216a6b0c7dc4165fc56ffb614be3c3666386a298584759515ad6 6a9aebe2f6c8050694e258b695e7c1f36a1cea8e7e6adb4c36ff89366afc6b57 fe94331fe9651715cbdec902c9f4f1fb3ce4e5fb5210844d370829e38fbc8cb2 689c08f865bbd7c20f39ae40e5a2893bc3dc244e3ca1e40d57b6807900526572 ae5aaabf53937789e7d118af1b9812a5524fae14f743ef161d697366ebe6418b 7ec6eadbb29b4ae550135b39815a25145d8d3c9bdc9956de93037c736cfa39cb 0d17dfd8f2bb4617bbfb1e67b4ba059b66b0df17fec72f88e47616538dc94694 312a1ed3f389264d6a723ae78d9e1f31f7191b3d09799dced3ad23a053992150 8eca32f460da47eda7e78f843367a357f6fff2461895d4fab879bb6a0265ff2f bc3f6730bbf7ad296c75ec818f9ac658e6544de4644bf8a81865cc987023f1c0 222b57fd8fd294b906963a612c314577b7772b480f845ab47c8d621e9a936495 73c9a18bd8c0a6549d6be09e41dd0b6b24c3c08e36a499a342f6a4898e96406a 495d1ca3d0e46c5a4296e3ca8b7e87fc05a2426b0dccc53cecab89b8a68ea4b4 8c2e762df71af93b4581468d80570fc178e871cc390e3427ffbeaebdd290e4fa 52a8334e9bf66ff09e2882f30e06759ddec21121c
9584da0dabce1349a0d4442 5acd41732ec1fcdba6faabd0a21eebbf36f9bbe86f0189b954824666d6c9fc44]
	I1101 10:57:54.894860  554673 ssh_runner.go:195] Run: which crictl
	I1101 10:57:54.898753  554673 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 da9f929cc3dd216a6b0c7dc4165fc56ffb614be3c3666386a298584759515ad6 6a9aebe2f6c8050694e258b695e7c1f36a1cea8e7e6adb4c36ff89366afc6b57 fe94331fe9651715cbdec902c9f4f1fb3ce4e5fb5210844d370829e38fbc8cb2 689c08f865bbd7c20f39ae40e5a2893bc3dc244e3ca1e40d57b6807900526572 ae5aaabf53937789e7d118af1b9812a5524fae14f743ef161d697366ebe6418b 7ec6eadbb29b4ae550135b39815a25145d8d3c9bdc9956de93037c736cfa39cb 0d17dfd8f2bb4617bbfb1e67b4ba059b66b0df17fec72f88e47616538dc94694 312a1ed3f389264d6a723ae78d9e1f31f7191b3d09799dced3ad23a053992150 8eca32f460da47eda7e78f843367a357f6fff2461895d4fab879bb6a0265ff2f bc3f6730bbf7ad296c75ec818f9ac658e6544de4644bf8a81865cc987023f1c0 222b57fd8fd294b906963a612c314577b7772b480f845ab47c8d621e9a936495 73c9a18bd8c0a6549d6be09e41dd0b6b24c3c08e36a499a342f6a4898e96406a 495d1ca3d0e46c5a4296e3ca8b7e87fc05a2426b0dccc53cecab89b8a68ea4b4 8c2e762df71af93b4581468d80570fc178e871cc390e3427ffbeaebdd290e4fa 52a833
4e9bf66ff09e2882f30e06759ddec21121c9584da0dabce1349a0d4442 5acd41732ec1fcdba6faabd0a21eebbf36f9bbe86f0189b954824666d6c9fc44
	I1101 10:57:55.006730  554673 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 10:57:55.136637  554673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:57:55.145017  554673 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Nov  1 10:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Nov  1 10:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov  1 10:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov  1 10:56 /etc/kubernetes/scheduler.conf
	
	I1101 10:57:55.145078  554673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1101 10:57:55.153372  554673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1101 10:57:55.161781  554673 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:57:55.161855  554673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:57:55.169863  554673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1101 10:57:55.178248  554673 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:57:55.178306  554673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:57:55.186121  554673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1101 10:57:55.193935  554673 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:57:55.193992  554673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:57:55.201401  554673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:57:55.209276  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:57:55.257300  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:57:58.677403  554673 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.420075829s)
	I1101 10:57:58.677480  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:57:58.906773  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:57:58.965297  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:57:59.056951  554673 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:57:59.057026  554673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:57:59.558074  554673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:58:00.057602  554673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:58:00.075413  554673 api_server.go:72] duration metric: took 1.018472458s to wait for apiserver process to appear ...
	I1101 10:58:00.075429  554673 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:58:00.075452  554673 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 10:58:03.616408  554673 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:58:03.616425  554673 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:58:03.616437  554673 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 10:58:03.743478  554673 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:58:03.743499  554673 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:58:04.075711  554673 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 10:58:04.083982  554673 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:58:04.084000  554673 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:58:04.575539  554673 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 10:58:04.584278  554673 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:58:04.584297  554673 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:58:05.076078  554673 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 10:58:05.084414  554673 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1101 10:58:05.098167  554673 api_server.go:141] control plane version: v1.34.1
	I1101 10:58:05.098184  554673 api_server.go:131] duration metric: took 5.022750028s to wait for apiserver health ...
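[editor's note] The 403 and 500 responses above are typical while the restarted apiserver is still finishing its RBAC and priority-class bootstrap post-start hooks; minikube simply keeps polling /healthz until it returns 200. A minimal sketch of such a poll loop, in Go, for illustration only (the real client authenticates with the cluster certificates instead of skipping TLS verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline expires. 403/500 responses are treated as "not ready yet".
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Assumption: TLS verification skipped for illustration only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // Endpoint taken from the log above.
        if err := pollHealthz("https://192.168.49.2:8441/healthz", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }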
	I1101 10:58:05.098197  554673 cni.go:84] Creating CNI manager for ""
	I1101 10:58:05.098203  554673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:58:05.101631  554673 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:58:05.104583  554673 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:58:05.108994  554673 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:58:05.109005  554673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:58:05.123532  554673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
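[editor's note] The CNI step above copies the kindnet manifest to the node and applies it with the version-matched kubectl binary. A standalone sketch of that same command via os/exec, using only the paths that appear in the log (run on the node itself):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Applies the CNI manifest exactly as the log shows minikube doing it.
    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }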
	I1101 10:58:05.644770  554673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:58:05.656306  554673 system_pods.go:59] 8 kube-system pods found
	I1101 10:58:05.656335  554673 system_pods.go:61] "coredns-66bc5c9577-7vkrj" [1669ddba-8877-4f5f-864a-a9367c62e3ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:58:05.656342  554673 system_pods.go:61] "etcd-functional-203469" [580a631c-696e-4ce1-bad9-50137fad0bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:58:05.656347  554673 system_pods.go:61] "kindnet-q7tmb" [343f7ee2-dee4-4187-81dc-b96d6ac5c666] Running
	I1101 10:58:05.656358  554673 system_pods.go:61] "kube-apiserver-functional-203469" [a06c23c2-d183-48fa-8943-97576b8996fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:58:05.656370  554673 system_pods.go:61] "kube-controller-manager-functional-203469" [b641b147-3845-467c-95bd-1192395a28ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:58:05.656375  554673 system_pods.go:61] "kube-proxy-wlm8x" [38bfbc57-c111-43c5-b71b-e01ba071a9d1] Running
	I1101 10:58:05.656380  554673 system_pods.go:61] "kube-scheduler-functional-203469" [db676a46-829c-4f01-93a0-4cc1c8464f0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:58:05.656384  554673 system_pods.go:61] "storage-provisioner" [e6b24502-fa7b-4cbb-a904-fd3e804802a9] Running
	I1101 10:58:05.656390  554673 system_pods.go:74] duration metric: took 11.608923ms to wait for pod list to return data ...
	I1101 10:58:05.656397  554673 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:58:05.667758  554673 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:58:05.667777  554673 node_conditions.go:123] node cpu capacity is 2
	I1101 10:58:05.667788  554673 node_conditions.go:105] duration metric: took 11.387545ms to run NodePressure ...
	I1101 10:58:05.667871  554673 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:58:05.958870  554673 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 10:58:05.962383  554673 kubeadm.go:744] kubelet initialised
	I1101 10:58:05.962394  554673 kubeadm.go:745] duration metric: took 3.511802ms waiting for restarted kubelet to initialise ...
	I1101 10:58:05.962411  554673 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:58:05.973063  554673 ops.go:34] apiserver oom_adj: -16
	I1101 10:58:05.973075  554673 kubeadm.go:602] duration metric: took 11.124153163s to restartPrimaryControlPlane
	I1101 10:58:05.973083  554673 kubeadm.go:403] duration metric: took 11.172421542s to StartCluster
	I1101 10:58:05.973098  554673 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:58:05.973160  554673 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:58:05.973876  554673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:58:05.974092  554673 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:58:05.974354  554673 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:58:05.974389  554673 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:58:05.974449  554673 addons.go:70] Setting storage-provisioner=true in profile "functional-203469"
	I1101 10:58:05.974461  554673 addons.go:239] Setting addon storage-provisioner=true in "functional-203469"
	W1101 10:58:05.974466  554673 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:58:05.974486  554673 host.go:66] Checking if "functional-203469" exists ...
	I1101 10:58:05.974553  554673 addons.go:70] Setting default-storageclass=true in profile "functional-203469"
	I1101 10:58:05.974567  554673 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-203469"
	I1101 10:58:05.974864  554673 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
	I1101 10:58:05.974919  554673 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
	I1101 10:58:05.977433  554673 out.go:179] * Verifying Kubernetes components...
	I1101 10:58:05.980431  554673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:58:06.004342  554673 addons.go:239] Setting addon default-storageclass=true in "functional-203469"
	W1101 10:58:06.004353  554673 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:58:06.004377  554673 host.go:66] Checking if "functional-203469" exists ...
	I1101 10:58:06.004932  554673 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
	I1101 10:58:06.020648  554673 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:58:06.025792  554673 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:58:06.025804  554673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:58:06.025881  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:58:06.059830  554673 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:58:06.059846  554673 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:58:06.059911  554673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 10:58:06.071040  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:58:06.095315  554673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 10:58:06.210788  554673 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:58:06.211183  554673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:58:06.229921  554673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:58:06.235344  554673 node_ready.go:35] waiting up to 6m0s for node "functional-203469" to be "Ready" ...
	I1101 10:58:06.238546  554673 node_ready.go:49] node "functional-203469" is "Ready"
	I1101 10:58:06.238562  554673 node_ready.go:38] duration metric: took 3.1996ms for node "functional-203469" to be "Ready" ...
	I1101 10:58:06.238573  554673 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:58:06.238624  554673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:58:07.067617  554673 api_server.go:72] duration metric: took 1.093502362s to wait for apiserver process to appear ...
	I1101 10:58:07.067627  554673 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:58:07.067641  554673 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1101 10:58:07.082782  554673 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1101 10:58:07.083842  554673 api_server.go:141] control plane version: v1.34.1
	I1101 10:58:07.083855  554673 api_server.go:131] duration metric: took 16.223213ms to wait for apiserver health ...
	I1101 10:58:07.083863  554673 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:58:07.090540  554673 system_pods.go:59] 8 kube-system pods found
	I1101 10:58:07.090559  554673 system_pods.go:61] "coredns-66bc5c9577-7vkrj" [1669ddba-8877-4f5f-864a-a9367c62e3ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:58:07.090565  554673 system_pods.go:61] "etcd-functional-203469" [580a631c-696e-4ce1-bad9-50137fad0bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:58:07.090570  554673 system_pods.go:61] "kindnet-q7tmb" [343f7ee2-dee4-4187-81dc-b96d6ac5c666] Running
	I1101 10:58:07.090575  554673 system_pods.go:61] "kube-apiserver-functional-203469" [a06c23c2-d183-48fa-8943-97576b8996fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:58:07.090581  554673 system_pods.go:61] "kube-controller-manager-functional-203469" [b641b147-3845-467c-95bd-1192395a28ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:58:07.090586  554673 system_pods.go:61] "kube-proxy-wlm8x" [38bfbc57-c111-43c5-b71b-e01ba071a9d1] Running
	I1101 10:58:07.090591  554673 system_pods.go:61] "kube-scheduler-functional-203469" [db676a46-829c-4f01-93a0-4cc1c8464f0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:58:07.090596  554673 system_pods.go:61] "storage-provisioner" [e6b24502-fa7b-4cbb-a904-fd3e804802a9] Running
	I1101 10:58:07.090601  554673 system_pods.go:74] duration metric: took 6.734418ms to wait for pod list to return data ...
	I1101 10:58:07.090608  554673 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:58:07.090832  554673 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:58:07.093954  554673 addons.go:515] duration metric: took 1.119548594s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:58:07.095542  554673 default_sa.go:45] found service account: "default"
	I1101 10:58:07.095560  554673 default_sa.go:55] duration metric: took 4.947555ms for default service account to be created ...
	I1101 10:58:07.095567  554673 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:58:07.099450  554673 system_pods.go:86] 8 kube-system pods found
	I1101 10:58:07.099467  554673 system_pods.go:89] "coredns-66bc5c9577-7vkrj" [1669ddba-8877-4f5f-864a-a9367c62e3ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:58:07.099475  554673 system_pods.go:89] "etcd-functional-203469" [580a631c-696e-4ce1-bad9-50137fad0bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:58:07.099479  554673 system_pods.go:89] "kindnet-q7tmb" [343f7ee2-dee4-4187-81dc-b96d6ac5c666] Running
	I1101 10:58:07.099485  554673 system_pods.go:89] "kube-apiserver-functional-203469" [a06c23c2-d183-48fa-8943-97576b8996fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:58:07.099490  554673 system_pods.go:89] "kube-controller-manager-functional-203469" [b641b147-3845-467c-95bd-1192395a28ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:58:07.099494  554673 system_pods.go:89] "kube-proxy-wlm8x" [38bfbc57-c111-43c5-b71b-e01ba071a9d1] Running
	I1101 10:58:07.099499  554673 system_pods.go:89] "kube-scheduler-functional-203469" [db676a46-829c-4f01-93a0-4cc1c8464f0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:58:07.099502  554673 system_pods.go:89] "storage-provisioner" [e6b24502-fa7b-4cbb-a904-fd3e804802a9] Running
	I1101 10:58:07.099508  554673 system_pods.go:126] duration metric: took 3.936973ms to wait for k8s-apps to be running ...
	I1101 10:58:07.099514  554673 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:58:07.099568  554673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:58:07.114884  554673 system_svc.go:56] duration metric: took 15.359661ms WaitForService to wait for kubelet
	I1101 10:58:07.114902  554673 kubeadm.go:587] duration metric: took 1.140790133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:58:07.114919  554673 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:58:07.120910  554673 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 10:58:07.120925  554673 node_conditions.go:123] node cpu capacity is 2
	I1101 10:58:07.120935  554673 node_conditions.go:105] duration metric: took 6.011799ms to run NodePressure ...
	I1101 10:58:07.120946  554673 start.go:242] waiting for startup goroutines ...
	I1101 10:58:07.120953  554673 start.go:247] waiting for cluster config update ...
	I1101 10:58:07.120962  554673 start.go:256] writing updated cluster config ...
	I1101 10:58:07.121274  554673 ssh_runner.go:195] Run: rm -f paused
	I1101 10:58:07.124828  554673 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:58:07.188462  554673 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7vkrj" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:58:09.195039  554673 pod_ready.go:104] pod "coredns-66bc5c9577-7vkrj" is not "Ready", error: <nil>
	I1101 10:58:11.701180  554673 pod_ready.go:94] pod "coredns-66bc5c9577-7vkrj" is "Ready"
	I1101 10:58:11.701194  554673 pod_ready.go:86] duration metric: took 4.512719447s for pod "coredns-66bc5c9577-7vkrj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:11.703710  554673 pod_ready.go:83] waiting for pod "etcd-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:58:13.708777  554673 pod_ready.go:104] pod "etcd-functional-203469" is not "Ready", error: <nil>
	I1101 10:58:14.708811  554673 pod_ready.go:94] pod "etcd-functional-203469" is "Ready"
	I1101 10:58:14.708826  554673 pod_ready.go:86] duration metric: took 3.005103326s for pod "etcd-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:14.711212  554673 pod_ready.go:83] waiting for pod "kube-apiserver-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:14.715484  554673 pod_ready.go:94] pod "kube-apiserver-functional-203469" is "Ready"
	I1101 10:58:14.715497  554673 pod_ready.go:86] duration metric: took 4.273273ms for pod "kube-apiserver-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:14.717740  554673 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:14.722320  554673 pod_ready.go:94] pod "kube-controller-manager-functional-203469" is "Ready"
	I1101 10:58:14.722334  554673 pod_ready.go:86] duration metric: took 4.582964ms for pod "kube-controller-manager-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:14.724651  554673 pod_ready.go:83] waiting for pod "kube-proxy-wlm8x" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:14.907301  554673 pod_ready.go:94] pod "kube-proxy-wlm8x" is "Ready"
	I1101 10:58:14.907316  554673 pod_ready.go:86] duration metric: took 182.651981ms for pod "kube-proxy-wlm8x" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:15.108434  554673 pod_ready.go:83] waiting for pod "kube-scheduler-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:15.508108  554673 pod_ready.go:94] pod "kube-scheduler-functional-203469" is "Ready"
	I1101 10:58:15.508123  554673 pod_ready.go:86] duration metric: took 399.664146ms for pod "kube-scheduler-functional-203469" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:58:15.508134  554673 pod_ready.go:40] duration metric: took 8.383284872s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:58:15.562380  554673 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 10:58:15.565768  554673 out.go:179] * Done! kubectl is now configured to use "functional-203469" cluster and "default" namespace by default
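[editor's note] The pod_ready waits above poll each control-plane pod for a Ready condition using the listed label selectors. A rough client-go equivalent, shown only as a sketch (kubeconfig path and selectors copied from the log; this is not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allPodsReady reports whether every kube-system pod matching the label
    // selector has its PodReady condition set to True.
    func allPodsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, pod := range pods.Items {
            ready := false
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        // Assumption: kubeconfig path taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/21830-532863/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for _, sel := range []string{"k8s-app=kube-dns", "component=etcd",
            "component=kube-apiserver", "component=kube-controller-manager",
            "k8s-app=kube-proxy", "component=kube-scheduler"} {
            ok, err := allPodsReady(cs, sel)
            fmt.Printf("%-35s ready=%v err=%v\n", sel, ok, err)
        }
    }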
	
	
	==> CRI-O <==
	Nov 01 10:58:53 functional-203469 crio[3576]: time="2025-11-01T10:58:53.109222179Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-m7jwd Namespace:default ID:67fe46aa6a93bbd9cc7f97aea32aa4a828a235012729782bf2e0d8560d7c7975 UID:9d7c253c-1421-43e0-902e-c3c9ef14e0cc NetNS:/var/run/netns/85cdef81-9126-415d-a350-c2b1f2d713ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000789c0}] Aliases:map[]}"
	Nov 01 10:58:53 functional-203469 crio[3576]: time="2025-11-01T10:58:53.109391141Z" level=info msg="Checking pod default_hello-node-75c85bcc94-m7jwd for CNI network kindnet (type=ptp)"
	Nov 01 10:58:53 functional-203469 crio[3576]: time="2025-11-01T10:58:53.113498832Z" level=info msg="Ran pod sandbox 67fe46aa6a93bbd9cc7f97aea32aa4a828a235012729782bf2e0d8560d7c7975 with infra container: default/hello-node-75c85bcc94-m7jwd/POD" id=5c70222d-f03d-4028-813c-661d25785b0a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:58:53 functional-203469 crio[3576]: time="2025-11-01T10:58:53.115063432Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8f6c2af5-e4a3-48ff-887d-42250eed9ec7 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.195231404Z" level=info msg="Stopping pod sandbox: 27e8392a725cc8cbc5841914837259c8c28d51559a48d1936a67c97d181b18da" id=3814df8f-2a24-4ebb-b614-8f042987f517 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.195284844Z" level=info msg="Stopped pod sandbox (already stopped): 27e8392a725cc8cbc5841914837259c8c28d51559a48d1936a67c97d181b18da" id=3814df8f-2a24-4ebb-b614-8f042987f517 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.19602979Z" level=info msg="Removing pod sandbox: 27e8392a725cc8cbc5841914837259c8c28d51559a48d1936a67c97d181b18da" id=1fd55bb2-51b6-4cd0-997c-d09fcbdec77c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.201864126Z" level=info msg="Removed pod sandbox: 27e8392a725cc8cbc5841914837259c8c28d51559a48d1936a67c97d181b18da" id=1fd55bb2-51b6-4cd0-997c-d09fcbdec77c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.207943471Z" level=info msg="Stopping pod sandbox: edbce46b73f421c3db8bb32400a9e5b7c25c025b7c0f220746955a18e5ea48c5" id=95e3d13d-dbc1-4d9b-a92a-7521e4c6d778 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.208007152Z" level=info msg="Stopped pod sandbox (already stopped): edbce46b73f421c3db8bb32400a9e5b7c25c025b7c0f220746955a18e5ea48c5" id=95e3d13d-dbc1-4d9b-a92a-7521e4c6d778 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.21172022Z" level=info msg="Removing pod sandbox: edbce46b73f421c3db8bb32400a9e5b7c25c025b7c0f220746955a18e5ea48c5" id=271bf84b-3425-4302-a74c-02f98868f20f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.219917411Z" level=info msg="Removed pod sandbox: edbce46b73f421c3db8bb32400a9e5b7c25c025b7c0f220746955a18e5ea48c5" id=271bf84b-3425-4302-a74c-02f98868f20f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.226518125Z" level=info msg="Stopping pod sandbox: 704eb37f7588b34768e40f74796ec6ec0e3193a21c0ac84359b2315a8a90b7a3" id=35842e35-6ac3-4ce3-a206-b8c278deabe1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.226572066Z" level=info msg="Stopped pod sandbox (already stopped): 704eb37f7588b34768e40f74796ec6ec0e3193a21c0ac84359b2315a8a90b7a3" id=35842e35-6ac3-4ce3-a206-b8c278deabe1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.231844048Z" level=info msg="Removing pod sandbox: 704eb37f7588b34768e40f74796ec6ec0e3193a21c0ac84359b2315a8a90b7a3" id=393cff1a-5db8-41a3-82af-67d96bf3068d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:58:59 functional-203469 crio[3576]: time="2025-11-01T10:58:59.235901038Z" level=info msg="Removed pod sandbox: 704eb37f7588b34768e40f74796ec6ec0e3193a21c0ac84359b2315a8a90b7a3" id=393cff1a-5db8-41a3-82af-67d96bf3068d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:59:07 functional-203469 crio[3576]: time="2025-11-01T10:59:07.027973177Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8f77ed96-639b-4849-9dbe-1b7f74d9ed9e name=/runtime.v1.ImageService/PullImage
	Nov 01 10:59:12 functional-203469 crio[3576]: time="2025-11-01T10:59:12.027775147Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=62e432f2-72e7-4604-a205-29f5983424c1 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:59:31 functional-203469 crio[3576]: time="2025-11-01T10:59:31.028304156Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9d28e12c-8c65-4f21-a6e4-334113bb4cf6 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:00:04 functional-203469 crio[3576]: time="2025-11-01T11:00:04.028439983Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ee9b6ca4-7396-4005-825e-640c13c0b113 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:00:13 functional-203469 crio[3576]: time="2025-11-01T11:00:13.028854639Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5f25e4aa-5c43-4cb5-bdb6-3a0419cbf415 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:01:25 functional-203469 crio[3576]: time="2025-11-01T11:01:25.028599575Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=354b711a-6b33-450c-b113-a9e4dd24c1f0 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:01:40 functional-203469 crio[3576]: time="2025-11-01T11:01:40.036116624Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8de736b1-8868-46b1-88b5-f95ebdd8d221 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:04:12 functional-203469 crio[3576]: time="2025-11-01T11:04:12.028600351Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b57041ce-f084-4d9d-84ba-e18e00ee81ab name=/runtime.v1.ImageService/PullImage
	Nov 01 11:04:23 functional-203469 crio[3576]: time="2025-11-01T11:04:23.027879045Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=518df980-bdb7-493b-b105-5e54c5b5761c name=/runtime.v1.ImageService/PullImage
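[editor's note] The CRI-O entries above show the kicbase/echo-server:latest pull being retried from 10:58 through 11:04 with no corresponding "Pulled image" entry, which suggests the hello-node pods listed under "describe nodes" below never got their application container started. A quick check from inside the node (e.g. via minikube ssh) is to list CRI-O's images; a small sketch, assuming crictl is available on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Lists CRI-O images and prints any entry for the echo-server repository,
    // to see whether the pull shown in the log ever completed.
    func main() {
        out, err := exec.Command("sudo", "crictl", "images").CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err, string(out))
            return
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "echo-server") {
                fmt.Println(line)
            }
        }
    }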
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fc1365909a0cf       docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424   9 minutes ago       Running             myfrontend                0                   53d22d5cf29d7       sp-pod                                      default
	544be8052a880       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   6fba5f44a5a0f       nginx-svc                                   default
	158a41bd96398       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   be482be6fdf94       kindnet-q7tmb                               kube-system
	311d77deccd13       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   4ee9ed4e63157       kube-proxy-wlm8x                            kube-system
	0e1cc2252f773       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   e02853fa56bb5       coredns-66bc5c9577-7vkrj                    kube-system
	4b19ac069e73e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   32721298d4f70       storage-provisioner                         kube-system
	6a012e8341bae       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   a84ac82f8c564       kube-apiserver-functional-203469            kube-system
	da98804012b6d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   3e9166eb329ef       kube-scheduler-functional-203469            kube-system
	03403e636152c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   fde50367ac53d       kube-controller-manager-functional-203469   kube-system
	25f6cfb65604f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   110478a803483       etcd-functional-203469                      kube-system
	da9f929cc3dd2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   32721298d4f70       storage-provisioner                         kube-system
	6a9aebe2f6c80       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   110478a803483       etcd-functional-203469                      kube-system
	fe94331fe9651       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   fde50367ac53d       kube-controller-manager-functional-203469   kube-system
	689c08f865bbd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   3e9166eb329ef       kube-scheduler-functional-203469            kube-system
	0d17dfd8f2bb4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   e02853fa56bb5       coredns-66bc5c9577-7vkrj                    kube-system
	312a1ed3f3892       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   4ee9ed4e63157       kube-proxy-wlm8x                            kube-system
	8eca32f460da4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   be482be6fdf94       kindnet-q7tmb                               kube-system
	
	
	==> coredns [0d17dfd8f2bb4617bbfb1e67b4ba059b66b0df17fec72f88e47616538dc94694] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45488 - 54948 "HINFO IN 1444869478336541398.8546918880119614850. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025616131s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [0e1cc2252f773059815e70c8b54f14e42fdab583cdc50f348fc835fcf31260c6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33998 - 50753 "HINFO IN 409664298170193206.8808752781714992292. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035716806s
	
	
	==> describe nodes <==
	Name:               functional-203469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-203469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-203469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_56_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:56:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-203469
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:08:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:07:57 +0000   Sat, 01 Nov 2025 10:56:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:07:57 +0000   Sat, 01 Nov 2025 10:56:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:07:57 +0000   Sat, 01 Nov 2025 10:56:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:07:57 +0000   Sat, 01 Nov 2025 10:56:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-203469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                2763cc90-6c53-4162-9ecd-c89752ce2775
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-m7jwd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-48kjc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-7vkrj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-203469                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-q7tmb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-203469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-203469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wlm8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-203469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-203469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-203469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-203469 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-203469 event: Registered Node functional-203469 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-203469 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-203469 event: Registered Node functional-203469 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-203469 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-203469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-203469 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-203469 event: Registered Node functional-203469 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:30] overlayfs: idmapped layers are currently not supported
	[ +47.648915] overlayfs: idmapped layers are currently not supported
	[  +9.344673] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:40] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[ +12.370416] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:55] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [25f6cfb65604fc4f24e15a8e6bd3f8b1c8ee02b5fc39d87fe8d2d68418358729] <==
	{"level":"warn","ts":"2025-11-01T10:58:02.304980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.329172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.351927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.365868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.384640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.400086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.425335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.435448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.458384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.476887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.521785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.534522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.565205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.606714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.646598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.686584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.707094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.723673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.749584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.776424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.786491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:58:02.845782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32992","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T11:08:01.489930Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1110}
	{"level":"info","ts":"2025-11-01T11:08:01.513975Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1110,"took":"23.641062ms","hash":3700526026,"current-db-size-bytes":3239936,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1392640,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-01T11:08:01.514026Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3700526026,"revision":1110,"compact-revision":-1}
	
	
	==> etcd [6a9aebe2f6c8050694e258b695e7c1f36a1cea8e7e6adb4c36ff89366afc6b57] <==
	{"level":"warn","ts":"2025-11-01T10:57:15.984685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:57:16.006633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:57:16.018901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:57:16.092003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:57:16.102893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:57:16.119353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:57:16.145673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34174","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:57:39.668197Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:57:39.668266Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-203469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:57:39.668369Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:57:39.813074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:57:39.813157Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:57:39.813312Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:57:39.813390Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:57:39.813425Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:57:39.813527Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:57:39.813574Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:57:39.813608Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:57:39.813731Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-01T10:57:39.813804Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T10:57:39.813852Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:57:39.817588Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:57:39.817670Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:57:39.817756Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:57:39.817786Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-203469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:08:37 up  2:51,  0 user,  load average: 0.28, 0.42, 1.42
	Linux functional-203469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [158a41bd96398efd81f17f606ea3db8c3e14bc4ed4fdc2846ea96162c4128374] <==
	I1101 11:06:34.730424       1 main.go:301] handling current node
	I1101 11:06:44.734034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:06:44.734067       1 main.go:301] handling current node
	I1101 11:06:54.737775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:06:54.737809       1 main.go:301] handling current node
	I1101 11:07:04.729440       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:07:04.729548       1 main.go:301] handling current node
	I1101 11:07:14.729432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:07:14.729559       1 main.go:301] handling current node
	I1101 11:07:24.734795       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:07:24.734830       1 main.go:301] handling current node
	I1101 11:07:34.729792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:07:34.729859       1 main.go:301] handling current node
	I1101 11:07:44.735276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:07:44.735312       1 main.go:301] handling current node
	I1101 11:07:54.735388       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:07:54.735425       1 main.go:301] handling current node
	I1101 11:08:04.734011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:08:04.734108       1 main.go:301] handling current node
	I1101 11:08:14.737858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:08:14.737896       1 main.go:301] handling current node
	I1101 11:08:24.733840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:08:24.733878       1 main.go:301] handling current node
	I1101 11:08:34.729800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:08:34.729837       1 main.go:301] handling current node
	
	
	==> kindnet [8eca32f460da47eda7e78f843367a357f6fff2461895d4fab879bb6a0265ff2f] <==
	I1101 10:57:13.524795       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:57:13.525770       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 10:57:13.525938       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:57:13.525951       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:57:13.525961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:57:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:57:13.797559       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:57:13.797649       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:57:13.797685       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:57:13.808736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:57:13.817798       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:57:13.818163       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:57:13.818330       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:57:13.818485       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:57:16.942781       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:57:16.942816       1 metrics.go:72] Registering metrics
	I1101 10:57:16.942889       1 controller.go:711] "Syncing nftables rules"
	I1101 10:57:23.797223       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:23.797415       1 main.go:301] handling current node
	I1101 10:57:33.797934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:33.797979       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6a012e8341baed76a16a5d57b288633549f052ca640eaa022c3a17b928eae00c] <==
	I1101 10:58:03.897343       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:58:03.897775       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:58:03.897996       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:58:03.898153       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:58:03.898194       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:58:03.898223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:58:03.898253       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:58:03.898481       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:58:03.898540       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:58:03.931274       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:58:04.096218       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:58:04.540713       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:58:05.632905       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:58:05.852083       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:58:05.927040       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:58:05.934229       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:58:07.330017       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:58:07.381880       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:58:07.433397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:58:18.971891       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.127.1"}
	I1101 10:58:25.666009       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.185.249"}
	I1101 10:58:35.403406       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.212.8"}
	E1101 10:58:45.351554       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1101 10:58:52.859685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.185.189"}
	I1101 11:08:03.830423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [03403e636152cba7fde426df05f573719e3a0986ab7b6290769059b3e144bf8c] <==
	I1101 10:58:07.081378       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:58:07.082931       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:58:07.085329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:58:07.088202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:58:07.088294       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:58:07.088335       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:58:07.088354       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:58:07.088358       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:58:07.088363       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:58:07.088426       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:58:07.088457       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:58:07.088476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:58:07.095471       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:58:07.095627       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:58:07.096047       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-203469"
	I1101 10:58:07.096135       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:58:07.100617       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:58:07.104127       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:58:07.104183       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:58:07.109912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:58:07.110030       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:58:07.124426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:58:07.124657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:58:07.124665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:58:07.126902       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [fe94331fe9651715cbdec902c9f4f1fb3ce4e5fb5210844d370829e38fbc8cb2] <==
	I1101 10:57:19.638899       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:57:19.642141       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:57:19.642293       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:57:19.642374       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:57:19.642431       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:57:19.642462       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:57:19.645004       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:57:19.647642       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:57:19.649872       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:57:19.651147       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:57:19.652211       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:57:19.655355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:57:19.655452       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:57:19.655476       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:57:19.655483       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:57:19.655538       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:57:19.655605       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:57:19.655383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:57:19.655563       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:57:19.657226       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:57:19.662120       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:57:19.667442       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:57:19.669731       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:57:19.672003       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:57:19.674201       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [311d77deccd133cb5f1cad2f66521d78dc940d0389e497b6d997a049464449b8] <==
	I1101 10:58:04.493655       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:58:04.591689       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:58:04.693186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:58:04.693235       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:58:04.693333       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:58:04.722146       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:58:04.722211       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:58:04.746351       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:58:04.746706       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:58:04.746732       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:58:04.749839       1 config.go:200] "Starting service config controller"
	I1101 10:58:04.749927       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:58:04.751757       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:58:04.751836       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:58:04.751880       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:58:04.751930       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:58:04.752400       1 config.go:309] "Starting node config controller"
	I1101 10:58:04.752462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:58:04.752492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:58:04.850245       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:58:04.853811       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:58:04.853904       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [312a1ed3f389264d6a723ae78d9e1f31f7191b3d09799dced3ad23a053992150] <==
	I1101 10:57:13.698077       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:57:14.886515       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:57:17.065777       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:57:17.085760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:57:17.085856       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:57:17.310798       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:57:17.310851       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:57:17.323585       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:57:17.323942       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:57:17.324157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:57:17.325505       1 config.go:200] "Starting service config controller"
	I1101 10:57:17.325568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:57:17.325609       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:57:17.325637       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:57:17.325673       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:57:17.325790       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:57:17.326492       1 config.go:309] "Starting node config controller"
	I1101 10:57:17.326546       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:57:17.326576       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:57:17.425814       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:57:17.425893       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:57:17.428346       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [689c08f865bbd7c20f39ae40e5a2893bc3dc244e3ca1e40d57b6807900526572] <==
	I1101 10:57:15.021391       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:57:16.763917       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:57:16.763952       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:57:16.763963       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:57:16.763970       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:57:16.973181       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:57:16.973215       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:57:16.988005       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:57:16.988086       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:57:16.988102       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:57:16.988120       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:57:17.288208       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:57:39.671837       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:57:39.671862       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:57:39.671884       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:57:39.671905       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:57:39.672069       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:57:39.672103       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da98804012b6dd966315d1aa408b4d3e5e8cbaebe8a38e21eb6d2cfdd0eaaa57] <==
	I1101 10:58:02.193034       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:58:03.806191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:58:03.806298       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:58:03.806332       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:58:03.806378       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:58:03.843006       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:58:03.845152       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:58:03.848950       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:58:03.848998       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:58:03.851697       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:58:03.851774       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:58:03.949623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:06:02 functional-203469 kubelet[3887]: E1101 11:06:02.028101    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:06:05 functional-203469 kubelet[3887]: E1101 11:06:05.028439    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:06:16 functional-203469 kubelet[3887]: E1101 11:06:16.028111    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:06:20 functional-203469 kubelet[3887]: E1101 11:06:20.028211    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:06:28 functional-203469 kubelet[3887]: E1101 11:06:28.028330    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:06:31 functional-203469 kubelet[3887]: E1101 11:06:31.028514    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:06:43 functional-203469 kubelet[3887]: E1101 11:06:43.028447    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:06:43 functional-203469 kubelet[3887]: E1101 11:06:43.028868    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:06:54 functional-203469 kubelet[3887]: E1101 11:06:54.028024    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:06:57 functional-203469 kubelet[3887]: E1101 11:06:57.028264    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:07:05 functional-203469 kubelet[3887]: E1101 11:07:05.028387    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:07:11 functional-203469 kubelet[3887]: E1101 11:07:11.028187    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:07:16 functional-203469 kubelet[3887]: E1101 11:07:16.027826    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:07:23 functional-203469 kubelet[3887]: E1101 11:07:23.028264    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:07:31 functional-203469 kubelet[3887]: E1101 11:07:31.028010    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:07:36 functional-203469 kubelet[3887]: E1101 11:07:36.028494    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:07:45 functional-203469 kubelet[3887]: E1101 11:07:45.028942    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:07:48 functional-203469 kubelet[3887]: E1101 11:07:48.028318    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:07:57 functional-203469 kubelet[3887]: E1101 11:07:57.028648    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:08:03 functional-203469 kubelet[3887]: E1101 11:08:03.027984    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:08:09 functional-203469 kubelet[3887]: E1101 11:08:09.028454    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:08:14 functional-203469 kubelet[3887]: E1101 11:08:14.028027    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:08:21 functional-203469 kubelet[3887]: E1101 11:08:21.028132    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	Nov 01 11:08:29 functional-203469 kubelet[3887]: E1101 11:08:29.028499    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-m7jwd" podUID="9d7c253c-1421-43e0-902e-c3c9ef14e0cc"
	Nov 01 11:08:36 functional-203469 kubelet[3887]: E1101 11:08:36.027890    3887 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-48kjc" podUID="3d8d8677-d0e6-4148-9679-c0d3776df8fe"
	
	
	==> storage-provisioner [4b19ac069e73ebbbd65912545b16b90a92dc4b091ccca890fcf4784342ead5d5] <==
	W1101 11:08:12.597371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:14.601231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:14.605962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:16.609454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:16.616217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:18.619944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:18.624819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:20.627440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:20.632027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:22.635343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:22.641677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:24.644172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:24.648534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:26.651636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:26.655960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:28.658993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:28.663299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:30.665985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:30.672808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:32.675606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:32.680572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:34.683916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:34.689929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:36.698254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:08:36.705676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [da9f929cc3dd216a6b0c7dc4165fc56ffb614be3c3666386a298584759515ad6] <==
	I1101 10:57:26.255537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:57:26.272084       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:57:26.272146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:57:26.274646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:29.729737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:33.990330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:37.588263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-203469 -n functional-203469
helpers_test.go:269: (dbg) Run:  kubectl --context functional-203469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-m7jwd hello-node-connect-7d85dfc575-48kjc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-203469 describe pod hello-node-75c85bcc94-m7jwd hello-node-connect-7d85dfc575-48kjc
helpers_test.go:290: (dbg) kubectl --context functional-203469 describe pod hello-node-75c85bcc94-m7jwd hello-node-connect-7d85dfc575-48kjc:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-m7jwd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-203469/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:58:52 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7kr6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p7kr6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m45s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m7jwd to functional-203469
	  Normal   Pulling    6m58s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x20 over 9m45s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m28s (x21 over 9m45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-48kjc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-203469/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:58:35 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcsk7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tcsk7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-48kjc to functional-203469
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.56s)
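All of the Failed/ImagePullBackOff events in the post-mortem above share one cause: CRI-O's short-name mode is enforcing, and the unqualified reference kicbase/echo-server matches more than one unqualified-search registry, so the pull is rejected as ambiguous instead of a registry being picked silently. One node-side way to make the short name unambiguous is a registries.conf.d alias; a minimal sketch, assuming the Docker Hub copy of the image is the intended source (the report does not say which registry is meant) and using a hypothetical file name:

	# /etc/containers/registries.conf.d/echo-server.conf  (hypothetical file name)
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Once CRI-O picks up the alias, pulls of kicbase/echo-server resolve to exactly one registry and the ambiguous-list error no longer occurs.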

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-203469 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-203469 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-m7jwd" [9d7c253c-1421-43e0-902e-c3c9ef14e0cc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1101 10:59:01.387832  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:01:17.525530  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:01:45.229491  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:06:17.525519  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-203469 -n functional-203469
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 11:08:53.330558035 +0000 UTC m=+1239.881618878
functional_test.go:1460: (dbg) Run:  kubectl --context functional-203469 describe po hello-node-75c85bcc94-m7jwd -n default
functional_test.go:1460: (dbg) kubectl --context functional-203469 describe po hello-node-75c85bcc94-m7jwd -n default:
Name:             hello-node-75c85bcc94-m7jwd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-203469/192.168.49.2
Start Time:       Sat, 01 Nov 2025 10:58:52 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7kr6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-p7kr6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m7jwd to functional-203469
Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-203469 logs hello-node-75c85bcc94-m7jwd -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-203469 logs hello-node-75c85bcc94-m7jwd -n default: exit status 1 (116.00684ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-m7jwd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-203469 logs hello-node-75c85bcc94-m7jwd -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.91s)
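Note on the failure above: the pod events show CRI-O rejecting the pull because short-name resolution is set to enforcing, so the unqualified reference "kicbase/echo-server" cannot be resolved to a single registry ("returns ambiguous list"). Below is a minimal sketch of two workarounds, assuming the image is published on Docker Hub; the deployment and container names are taken from the describe output above, and the registries.conf edit is illustrative, not the fix the test suite itself applies:

	# Option 1: point the existing deployment at a fully-qualified reference.
	kubectl --context functional-203469 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest

	# Option 2: let unqualified names resolve against Docker Hub on the node
	# (containers-registries.conf keys; run inside the node, e.g. via minikube ssh,
	# and restart CRI-O so it picks the change up).
	printf 'unqualified-search-registries = ["docker.io"]\nshort-name-mode = "permissive"\n' \
	  | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio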

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 service --namespace=default --https --url hello-node: exit status 115 (514.489837ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31129
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-203469 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
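The NodePort itself resolves (https://192.168.49.2:31129 in the stdout above); the command still exits 115 because the service has no ready endpoints, which is the same hello-node pod stuck in ImagePullBackOff from the DeployApp test. The Format and URL subtests below fail the same way. A quick hedged check, using the names from this run:

	kubectl --context functional-203469 get svc hello-node -o wide
	kubectl --context functional-203469 get endpoints hello-node   # expected: no addresses
	kubectl --context functional-203469 get pods -l app=hello-node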

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 service hello-node --url --format={{.IP}}: exit status 115 (482.998023ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-203469 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 service hello-node --url: exit status 115 (517.276021ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31129
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-203469 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31129
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image load --daemon kicbase/echo-server:functional-203469 --alsologtostderr
2025/11/01 11:09:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-203469" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)
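For context, `image load --daemon` copies a tag from the host Docker daemon into the cluster's container storage, and the assertion then lists images inside the cluster. The commands below are a hedged manual re-run of the same round trip the test drives (the tag name is the test's own); running them by hand would show whether the host-side tag is missing or whether the load into CRI-O silently drops it:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-203469
	out/minikube-linux-arm64 -p functional-203469 image load --daemon kicbase/echo-server:functional-203469
	out/minikube-linux-arm64 -p functional-203469 image ls | grep echo-server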

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image load --daemon kicbase/echo-server:functional-203469 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-203469" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-203469
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image load --daemon kicbase/echo-server:functional-203469 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-203469" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image save kicbase/echo-server:functional-203469 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 11:09:07.369507  563112 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:09:07.373069  563112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:09:07.373090  563112 out.go:374] Setting ErrFile to fd 2...
	I1101 11:09:07.373097  563112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:09:07.373438  563112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:09:07.376346  563112 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:07.376501  563112 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:07.376960  563112 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
	I1101 11:09:07.401525  563112 ssh_runner.go:195] Run: systemctl --version
	I1101 11:09:07.401593  563112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
	I1101 11:09:07.436199  563112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
	I1101 11:09:07.544690  563112 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1101 11:09:07.544768  563112 cache_images.go:255] Failed to load cached images for "functional-203469": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1101 11:09:07.544790  563112 cache_images.go:267] failed pushing to: functional-203469

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)
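These two failures are coupled: `image save` exited cleanly but never wrote echo-server-save.tar (consistent with the tag not being present in the cluster, per the `image ls` failures above), so the follow-up `image load` from that path fails with "no such file or directory". A sketch for checking the intermediate artifact between the two steps; the /tmp path is illustrative, not the path the suite uses:

	out/minikube-linux-arm64 -p functional-203469 image save kicbase/echo-server:functional-203469 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar    # should be a non-empty archive before attempting the load
	out/minikube-linux-arm64 -p functional-203469 image load /tmp/echo-server-save.tar --alsologtostderr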

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-203469
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image save --daemon kicbase/echo-server:functional-203469 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-203469
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-203469: exit status 1 (21.45815ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-203469

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-203469

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
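`image save --daemon` is supposed to copy the image from the cluster back into the host Docker daemon; the check looks for a localhost/ prefix, presumably because that is how CRI-O stores tags that were loaded without a registry qualifier. Since the earlier load/ls steps never got the tag into the cluster, there is nothing to save back. A hedged way to see what, if anything, arrived in the daemon:

	docker image ls | grep echo-server
	docker image inspect localhost/kicbase/echo-server:functional-203469 --format '{{.Id}}'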

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (506.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node start m02 --alsologtostderr -v 5
E1101 11:14:47.172286  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:09.093876  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:17.525756  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:25.235207  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:52.936096  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:21:17.526196  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 node start m02 --alsologtostderr -v 5: exit status 80 (7m40.476306193s)

                                                
                                                
-- stdout --
	* Starting "ha-472819-m02" control-plane node in "ha-472819" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:14:19.063426  578608 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:14:19.065069  578608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:14:19.065085  578608 out.go:374] Setting ErrFile to fd 2...
	I1101 11:14:19.065090  578608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:14:19.065373  578608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:14:19.065682  578608 mustload.go:66] Loading cluster: ha-472819
	I1101 11:14:19.066126  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:14:19.066650  578608 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	W1101 11:14:19.084995  578608 host.go:58] "ha-472819-m02" host status: Stopped
	I1101 11:14:19.088166  578608 out.go:179] * Starting "ha-472819-m02" control-plane node in "ha-472819" cluster
	I1101 11:14:19.091257  578608 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:14:19.093781  578608 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:14:19.097456  578608 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:14:19.097529  578608 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 11:14:19.097582  578608 cache.go:59] Caching tarball of preloaded images
	I1101 11:14:19.097667  578608 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:14:19.097735  578608 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:14:19.097882  578608 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:14:19.098053  578608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:14:19.118256  578608 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:14:19.118281  578608 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:14:19.118299  578608 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:14:19.118321  578608 start.go:360] acquireMachinesLock for ha-472819-m02: {Name:mkd9b09c2f5958eb6cf9785ab2b809fc6e14102e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:14:19.118458  578608 start.go:364] duration metric: took 50.175µs to acquireMachinesLock for "ha-472819-m02"
	I1101 11:14:19.118488  578608 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:14:19.118494  578608 fix.go:54] fixHost starting: m02
	I1101 11:14:19.118842  578608 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:14:19.136992  578608 fix.go:112] recreateIfNeeded on ha-472819-m02: state=Stopped err=<nil>
	W1101 11:14:19.137019  578608 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:14:19.141672  578608 out.go:252] * Restarting existing docker container for "ha-472819-m02" ...
	I1101 11:14:19.141898  578608 cli_runner.go:164] Run: docker start ha-472819-m02
	I1101 11:14:19.447546  578608 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:14:19.469678  578608 kic.go:430] container "ha-472819-m02" state is running.
	I1101 11:14:19.470098  578608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:14:19.496990  578608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:14:19.497255  578608 machine.go:94] provisionDockerMachine start ...
	I1101 11:14:19.497327  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:19.522101  578608 main.go:143] libmachine: Using SSH client type: native
	I1101 11:14:19.522419  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1101 11:14:19.522431  578608 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:14:19.522988  578608 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33628->127.0.0.1:33530: read: connection reset by peer
	I1101 11:14:22.726329  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02
	
	I1101 11:14:22.726353  578608 ubuntu.go:182] provisioning hostname "ha-472819-m02"
	I1101 11:14:22.726429  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:22.747804  578608 main.go:143] libmachine: Using SSH client type: native
	I1101 11:14:22.748117  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1101 11:14:22.748136  578608 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819-m02 && echo "ha-472819-m02" | sudo tee /etc/hostname
	I1101 11:14:22.974004  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02
	
	I1101 11:14:22.974165  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:23.009894  578608 main.go:143] libmachine: Using SSH client type: native
	I1101 11:14:23.010196  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1101 11:14:23.010213  578608 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:14:23.203282  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:14:23.203309  578608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:14:23.203349  578608 ubuntu.go:190] setting up certificates
	I1101 11:14:23.203368  578608 provision.go:84] configureAuth start
	I1101 11:14:23.203470  578608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:14:23.229781  578608 provision.go:143] copyHostCerts
	I1101 11:14:23.229820  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:14:23.229856  578608 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:14:23.229869  578608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:14:23.229940  578608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:14:23.230038  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:14:23.230055  578608 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:14:23.230060  578608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:14:23.230086  578608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:14:23.230134  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:14:23.230149  578608 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:14:23.230153  578608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:14:23.230194  578608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:14:23.230260  578608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819-m02 san=[127.0.0.1 192.168.49.3 ha-472819-m02 localhost minikube]
	I1101 11:14:23.696986  578608 provision.go:177] copyRemoteCerts
	I1101 11:14:23.697050  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:14:23.697090  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:23.715985  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:14:23.829102  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:14:23.829166  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:14:23.853759  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:14:23.853826  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 11:14:23.876820  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:14:23.876882  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:14:23.898148  578608 provision.go:87] duration metric: took 694.758613ms to configureAuth
	I1101 11:14:23.898175  578608 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:14:23.898416  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:14:23.898529  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:23.934362  578608 main.go:143] libmachine: Using SSH client type: native
	I1101 11:14:23.934677  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1101 11:14:23.934698  578608 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:14:24.939753  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:14:24.939789  578608 machine.go:97] duration metric: took 5.442523797s to provisionDockerMachine
	I1101 11:14:24.939801  578608 start.go:293] postStartSetup for "ha-472819-m02" (driver="docker")
	I1101 11:14:24.939811  578608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:14:24.939891  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:14:24.939949  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:24.960944  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:14:25.074594  578608 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:14:25.078639  578608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:14:25.078670  578608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:14:25.078683  578608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:14:25.078749  578608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:14:25.078838  578608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:14:25.078852  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:14:25.079008  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:14:25.090682  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:14:25.114535  578608 start.go:296] duration metric: took 174.718996ms for postStartSetup
	I1101 11:14:25.114641  578608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:14:25.114721  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:25.133447  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:14:25.243580  578608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:14:25.248575  578608 fix.go:56] duration metric: took 6.130073006s for fixHost
	I1101 11:14:25.248599  578608 start.go:83] releasing machines lock for "ha-472819-m02", held for 6.130120628s
	I1101 11:14:25.248667  578608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:14:25.266443  578608 ssh_runner.go:195] Run: systemctl --version
	I1101 11:14:25.266491  578608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:14:25.266498  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:25.266553  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:14:25.287032  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:14:25.291705  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:14:25.502333  578608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:14:25.606227  578608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:14:25.628864  578608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:14:25.628945  578608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:14:25.643178  578608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:14:25.643206  578608 start.go:496] detecting cgroup driver to use...
	I1101 11:14:25.643236  578608 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:14:25.643290  578608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:14:25.679098  578608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:14:25.701349  578608 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:14:25.701429  578608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:14:25.726889  578608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:14:25.750447  578608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:14:26.028197  578608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:14:26.237472  578608 docker.go:234] disabling docker service ...
	I1101 11:14:26.237568  578608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:14:26.267853  578608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:14:26.289560  578608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:14:26.532657  578608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:14:26.840798  578608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:14:26.860218  578608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:14:26.881214  578608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:14:26.881296  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:26.897413  578608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:14:26.897496  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:26.913979  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:26.931226  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:26.973043  578608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:14:27.014482  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:27.039211  578608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:27.051004  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:14:27.064039  578608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:14:27.073194  578608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:14:27.083431  578608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:14:27.327674  578608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:15:57.567300  578608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.23958392s)
	I1101 11:15:57.567328  578608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:15:57.567388  578608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:15:57.572404  578608 start.go:564] Will wait 60s for crictl version
	I1101 11:15:57.572469  578608 ssh_runner.go:195] Run: which crictl
	I1101 11:15:57.577631  578608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:15:57.605139  578608 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:15:57.605224  578608 ssh_runner.go:195] Run: crio --version
	I1101 11:15:57.636111  578608 ssh_runner.go:195] Run: crio --version
	I1101 11:15:57.670269  578608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:15:57.673357  578608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:15:57.742456  578608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:15:57.727009791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:15:57.742666  578608 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:15:57.759133  578608 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:15:57.763297  578608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:15:57.774026  578608 mustload.go:66] Loading cluster: ha-472819
	I1101 11:15:57.774291  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:15:57.774563  578608 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:15:57.792667  578608 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:15:57.792945  578608 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.3
	I1101 11:15:57.792958  578608 certs.go:195] generating shared ca certs ...
	I1101 11:15:57.792972  578608 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:15:57.793098  578608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:15:57.793142  578608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:15:57.793153  578608 certs.go:257] generating profile certs ...
	I1101 11:15:57.793239  578608 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:15:57.793282  578608 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785
	I1101 11:15:57.793301  578608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1101 11:15:58.016621  578608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785 ...
	I1101 11:15:58.016711  578608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785: {Name:mk2d59a1374d864d583aeb1b4d2834607e8a941d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:15:58.016963  578608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785 ...
	I1101 11:15:58.017004  578608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785: {Name:mkef14567e1e26b52b0c0d74321135df983bc5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:15:58.017160  578608 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:15:58.017370  578608 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
	I1101 11:15:58.017575  578608 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
	I1101 11:15:58.017613  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:15:58.017657  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:15:58.017721  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:15:58.017758  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:15:58.017790  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:15:58.017833  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:15:58.017865  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:15:58.017893  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:15:58.017992  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:15:58.018067  578608 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:15:58.018106  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:15:58.018156  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:15:58.018245  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:15:58.018300  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:15:58.018402  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:15:58.018462  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:15:58.018505  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:15:58.018536  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:15:58.018637  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:15:58.036051  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:15:58.138116  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 11:15:58.144950  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 11:15:58.159382  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 11:15:58.165144  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 11:15:58.176077  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 11:15:58.181281  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 11:15:58.190992  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 11:15:58.195066  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 11:15:58.206432  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 11:15:58.211100  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 11:15:58.223496  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 11:15:58.227587  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 11:15:58.239706  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:15:58.261594  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:15:58.282102  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:15:58.300233  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:15:58.319100  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1101 11:15:58.347857  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:15:58.368246  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:15:58.386198  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:15:58.404021  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:15:58.423560  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:15:58.447694  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:15:58.472645  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 11:15:58.488302  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 11:15:58.503703  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 11:15:58.517391  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 11:15:58.531377  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 11:15:58.544986  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 11:15:58.557401  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 11:15:58.571225  578608 ssh_runner.go:195] Run: openssl version
	I1101 11:15:58.578046  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:15:58.587494  578608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:15:58.593105  578608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:15:58.593191  578608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:15:58.635310  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:15:58.644323  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:15:58.653119  578608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:15:58.657082  578608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:15:58.657225  578608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:15:58.699842  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:15:58.708321  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:15:58.716729  578608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:15:58.720604  578608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:15:58.720719  578608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:15:58.762170  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:15:58.770567  578608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:15:58.775766  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:15:58.819344  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:15:58.860873  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:15:58.902554  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:15:58.948263  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:15:58.991984  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 11:15:59.038199  578608 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 11:15:59.038367  578608 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:15:59.038399  578608 kube-vip.go:115] generating kube-vip config ...
	I1101 11:15:59.038454  578608 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:15:59.051443  578608 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:15:59.051569  578608 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 11:15:59.051640  578608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:15:59.060678  578608 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:15:59.060802  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 11:15:59.068602  578608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 11:15:59.084599  578608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:15:59.099086  578608 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 11:15:59.115310  578608 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:15:59.119660  578608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:15:59.130298  578608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:15:59.285095  578608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:15:59.300318  578608 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:15:59.300683  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:15:59.300729  578608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:15:59.303514  578608 out.go:179] * Verifying Kubernetes components...
	I1101 11:15:59.305308  578608 out.go:179] * Enabled addons: 
	I1101 11:15:59.307390  578608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:15:59.307455  578608 addons.go:515] duration metric: took 6.717714ms for enable addons: enabled=[]
	I1101 11:15:59.441859  578608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:15:59.456724  578608 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 11:15:59.456809  578608 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 11:15:59.457226  578608 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:15:59.457247  578608 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:15:59.457270  578608 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:15:59.457276  578608 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:15:59.457288  578608 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:15:59.457539  578608 node_ready.go:35] waiting up to 6m0s for node "ha-472819-m02" to be "Ready" ...
	I1101 11:15:59.457892  578608 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	W1101 11:16:01.461531  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:03.461648  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:05.960880  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:07.965825  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:10.460810  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:12.462022  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:14.963419  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:17.461288  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:19.962801  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:22.460998  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:24.461503  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:26.461549  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:28.960839  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:30.961373  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:32.966968  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:35.461953  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:37.964134  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:40.463126  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:42.962508  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:45.462070  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:47.961551  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:50.462455  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:52.961222  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:54.965866  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:57.461980  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:16:59.962121  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:01.962277  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:04.462246  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:06.961119  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:08.961546  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:11.461389  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:13.961024  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:15.962142  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:17.962524  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:20.461444  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:22.961273  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:24.961928  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:27.461653  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:29.962221  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:32.461591  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:34.960820  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:36.961827  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:39.461406  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:41.961807  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:44.460715  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:46.461377  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:48.961578  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:50.963846  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:53.461370  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:55.960977  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:57.961171  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:17:59.962094  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:02.461729  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:04.962593  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:07.463184  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:09.961653  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:11.963069  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:14.461331  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:16.462170  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:18.961023  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:20.961773  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:23.461510  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:25.961420  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:28.461593  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:30.461974  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:32.961461  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:34.961529  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:36.961645  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:39.461087  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:41.960968  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:43.961199  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:46.461682  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:48.461883  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:50.961442  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:52.961647  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:54.961926  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:57.461293  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:18:59.461774  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:01.961172  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:03.961654  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:05.962064  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:08.461820  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:10.961388  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:12.962132  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:14.962726  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:17.460825  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:19.461042  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:21.462961  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:23.960824  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:25.961509  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:28.461497  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:30.461855  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:32.461984  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:34.962239  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:37.461819  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:39.961778  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:42.461503  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:44.461760  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:46.961743  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:48.961927  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:50.961984  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:52.963933  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:55.460969  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:57.461884  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:19:59.965968  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:01.969049  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:04.461114  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:06.462755  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:08.961378  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:10.962653  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:13.461149  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:15.461981  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:17.462093  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:19.975995  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:22.461878  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:24.961686  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:27.460905  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:29.461459  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:31.962211  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:33.963783  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:36.460918  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:38.460990  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:40.462156  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:42.961857  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:44.962120  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:47.461447  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:49.963034  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:52.461084  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:54.461566  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:56.962030  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:20:59.461959  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:01.961667  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:03.961861  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:06.460921  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:08.461335  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:10.962492  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:13.461754  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:15.961429  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:18.460799  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:20.461135  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:22.461660  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:24.960986  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:26.961142  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:29.461473  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:31.961431  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:34.460746  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:36.461329  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:38.461738  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:40.962944  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:43.461771  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:45.962283  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:48.461779  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:50.961848  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:53.460733  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:55.460884  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	W1101 11:21:57.461124  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
	I1101 11:21:59.457938  578608 node_ready.go:38] duration metric: took 6m0.000371481s for node "ha-472819-m02" to be "Ready" ...
	I1101 11:21:59.461046  578608 out.go:203] 
	W1101 11:21:59.463878  578608 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 11:21:59.463905  578608 out.go:285] * 
	* 
	W1101 11:21:59.471454  578608 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 11:21:59.474312  578608 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:424: I1101 11:14:19.063426  578608 out.go:360] Setting OutFile to fd 1 ...
I1101 11:14:19.065069  578608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:14:19.065085  578608 out.go:374] Setting ErrFile to fd 2...
I1101 11:14:19.065090  578608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:14:19.065373  578608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
I1101 11:14:19.065682  578608 mustload.go:66] Loading cluster: ha-472819
I1101 11:14:19.066126  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:14:19.066650  578608 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
W1101 11:14:19.084995  578608 host.go:58] "ha-472819-m02" host status: Stopped
I1101 11:14:19.088166  578608 out.go:179] * Starting "ha-472819-m02" control-plane node in "ha-472819" cluster
I1101 11:14:19.091257  578608 cache.go:124] Beginning downloading kic base image for docker with crio
I1101 11:14:19.093781  578608 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
I1101 11:14:19.097456  578608 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 11:14:19.097529  578608 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
I1101 11:14:19.097582  578608 cache.go:59] Caching tarball of preloaded images
I1101 11:14:19.097667  578608 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
I1101 11:14:19.097735  578608 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
I1101 11:14:19.097882  578608 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1101 11:14:19.098053  578608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
I1101 11:14:19.118256  578608 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
I1101 11:14:19.118281  578608 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
I1101 11:14:19.118299  578608 cache.go:233] Successfully downloaded all kic artifacts
I1101 11:14:19.118321  578608 start.go:360] acquireMachinesLock for ha-472819-m02: {Name:mkd9b09c2f5958eb6cf9785ab2b809fc6e14102e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 11:14:19.118458  578608 start.go:364] duration metric: took 50.175µs to acquireMachinesLock for "ha-472819-m02"
I1101 11:14:19.118488  578608 start.go:96] Skipping create...Using existing machine configuration
I1101 11:14:19.118494  578608 fix.go:54] fixHost starting: m02
I1101 11:14:19.118842  578608 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
I1101 11:14:19.136992  578608 fix.go:112] recreateIfNeeded on ha-472819-m02: state=Stopped err=<nil>
W1101 11:14:19.137019  578608 fix.go:138] unexpected machine state, will restart: <nil>
I1101 11:14:19.141672  578608 out.go:252] * Restarting existing docker container for "ha-472819-m02" ...
I1101 11:14:19.141898  578608 cli_runner.go:164] Run: docker start ha-472819-m02
I1101 11:14:19.447546  578608 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
I1101 11:14:19.469678  578608 kic.go:430] container "ha-472819-m02" state is running.
I1101 11:14:19.470098  578608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
I1101 11:14:19.496990  578608 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
I1101 11:14:19.497255  578608 machine.go:94] provisionDockerMachine start ...
I1101 11:14:19.497327  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:19.522101  578608 main.go:143] libmachine: Using SSH client type: native
I1101 11:14:19.522419  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1101 11:14:19.522431  578608 main.go:143] libmachine: About to run SSH command:
hostname
I1101 11:14:19.522988  578608 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33628->127.0.0.1:33530: read: connection reset by peer
I1101 11:14:22.726329  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02

                                                
                                                
I1101 11:14:22.726353  578608 ubuntu.go:182] provisioning hostname "ha-472819-m02"
I1101 11:14:22.726429  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:22.747804  578608 main.go:143] libmachine: Using SSH client type: native
I1101 11:14:22.748117  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1101 11:14:22.748136  578608 main.go:143] libmachine: About to run SSH command:
sudo hostname ha-472819-m02 && echo "ha-472819-m02" | sudo tee /etc/hostname
I1101 11:14:22.974004  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02

                                                
                                                
I1101 11:14:22.974165  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:23.009894  578608 main.go:143] libmachine: Using SSH client type: native
I1101 11:14:23.010196  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1101 11:14:23.010213  578608 main.go:143] libmachine: About to run SSH command:

                                                
                                                
		if ! grep -xq '.*\sha-472819-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-472819-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I1101 11:14:23.203282  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
I1101 11:14:23.203309  578608 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
I1101 11:14:23.203349  578608 ubuntu.go:190] setting up certificates
I1101 11:14:23.203368  578608 provision.go:84] configureAuth start
I1101 11:14:23.203470  578608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
I1101 11:14:23.229781  578608 provision.go:143] copyHostCerts
I1101 11:14:23.229820  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
I1101 11:14:23.229856  578608 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
I1101 11:14:23.229869  578608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
I1101 11:14:23.229940  578608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
I1101 11:14:23.230038  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
I1101 11:14:23.230055  578608 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
I1101 11:14:23.230060  578608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
I1101 11:14:23.230086  578608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
I1101 11:14:23.230134  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
I1101 11:14:23.230149  578608 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
I1101 11:14:23.230153  578608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
I1101 11:14:23.230194  578608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
I1101 11:14:23.230260  578608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819-m02 san=[127.0.0.1 192.168.49.3 ha-472819-m02 localhost minikube]
I1101 11:14:23.696986  578608 provision.go:177] copyRemoteCerts
I1101 11:14:23.697050  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 11:14:23.697090  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:23.715985  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
I1101 11:14:23.829102  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1101 11:14:23.829166  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1101 11:14:23.853759  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
I1101 11:14:23.853826  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1101 11:14:23.876820  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1101 11:14:23.876882  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 11:14:23.898148  578608 provision.go:87] duration metric: took 694.758613ms to configureAuth
I1101 11:14:23.898175  578608 ubuntu.go:206] setting minikube options for container-runtime
I1101 11:14:23.898416  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:14:23.898529  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:23.934362  578608 main.go:143] libmachine: Using SSH client type: native
I1101 11:14:23.934677  578608 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1101 11:14:23.934698  578608 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1101 11:14:24.939753  578608 main.go:143] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

                                                
                                                
I1101 11:14:24.939789  578608 machine.go:97] duration metric: took 5.442523797s to provisionDockerMachine
I1101 11:14:24.939801  578608 start.go:293] postStartSetup for "ha-472819-m02" (driver="docker")
I1101 11:14:24.939811  578608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 11:14:24.939891  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 11:14:24.939949  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:24.960944  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
I1101 11:14:25.074594  578608 ssh_runner.go:195] Run: cat /etc/os-release
I1101 11:14:25.078639  578608 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1101 11:14:25.078670  578608 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1101 11:14:25.078683  578608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
I1101 11:14:25.078749  578608 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
I1101 11:14:25.078838  578608 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
I1101 11:14:25.078852  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
I1101 11:14:25.079008  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 11:14:25.090682  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
I1101 11:14:25.114535  578608 start.go:296] duration metric: took 174.718996ms for postStartSetup
I1101 11:14:25.114641  578608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1101 11:14:25.114721  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:25.133447  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
I1101 11:14:25.243580  578608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1101 11:14:25.248575  578608 fix.go:56] duration metric: took 6.130073006s for fixHost
I1101 11:14:25.248599  578608 start.go:83] releasing machines lock for "ha-472819-m02", held for 6.130120628s
I1101 11:14:25.248667  578608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
I1101 11:14:25.266443  578608 ssh_runner.go:195] Run: systemctl --version
I1101 11:14:25.266491  578608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 11:14:25.266498  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:25.266553  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
I1101 11:14:25.287032  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
I1101 11:14:25.291705  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
I1101 11:14:25.502333  578608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1101 11:14:25.606227  578608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1101 11:14:25.628864  578608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1101 11:14:25.628945  578608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1101 11:14:25.643178  578608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1101 11:14:25.643206  578608 start.go:496] detecting cgroup driver to use...
I1101 11:14:25.643236  578608 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1101 11:14:25.643290  578608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1101 11:14:25.679098  578608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1101 11:14:25.701349  578608 docker.go:218] disabling cri-docker service (if available) ...
I1101 11:14:25.701429  578608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1101 11:14:25.726889  578608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1101 11:14:25.750447  578608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1101 11:14:26.028197  578608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1101 11:14:26.237472  578608 docker.go:234] disabling docker service ...
I1101 11:14:26.237568  578608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 11:14:26.267853  578608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 11:14:26.289560  578608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 11:14:26.532657  578608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 11:14:26.840798  578608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 11:14:26.860218  578608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1101 11:14:26.881214  578608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1101 11:14:26.881296  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:26.897413  578608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1101 11:14:26.897496  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:26.913979  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:26.931226  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:26.973043  578608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1101 11:14:27.014482  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:27.039211  578608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:27.051004  578608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1101 11:14:27.064039  578608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 11:14:27.073194  578608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 11:14:27.083431  578608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 11:14:27.327674  578608 ssh_runner.go:195] Run: sudo systemctl restart crio
I1101 11:15:57.567300  578608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.23958392s)
I1101 11:15:57.567328  578608 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1101 11:15:57.567388  578608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1101 11:15:57.572404  578608 start.go:564] Will wait 60s for crictl version
I1101 11:15:57.572469  578608 ssh_runner.go:195] Run: which crictl
I1101 11:15:57.577631  578608 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1101 11:15:57.605139  578608 start.go:580] Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.34.1
RuntimeApiVersion:  v1
I1101 11:15:57.605224  578608 ssh_runner.go:195] Run: crio --version
I1101 11:15:57.636111  578608 ssh_runner.go:195] Run: crio --version
I1101 11:15:57.670269  578608 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1101 11:15:57.673357  578608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 11:15:57.742456  578608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:15:57.727009791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1101 11:15:57.742666  578608 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 11:15:57.759133  578608 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I1101 11:15:57.763297  578608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 11:15:57.774026  578608 mustload.go:66] Loading cluster: ha-472819
I1101 11:15:57.774291  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:15:57.774563  578608 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
I1101 11:15:57.792667  578608 host.go:66] Checking if "ha-472819" exists ...
I1101 11:15:57.792945  578608 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.3
I1101 11:15:57.792958  578608 certs.go:195] generating shared ca certs ...
I1101 11:15:57.792972  578608 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 11:15:57.793098  578608 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
I1101 11:15:57.793142  578608 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
I1101 11:15:57.793153  578608 certs.go:257] generating profile certs ...
I1101 11:15:57.793239  578608 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
I1101 11:15:57.793282  578608 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785
I1101 11:15:57.793301  578608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
I1101 11:15:58.016621  578608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785 ...
I1101 11:15:58.016711  578608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785: {Name:mk2d59a1374d864d583aeb1b4d2834607e8a941d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 11:15:58.016963  578608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785 ...
I1101 11:15:58.017004  578608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785: {Name:mkef14567e1e26b52b0c0d74321135df983bc5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 11:15:58.017160  578608 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.de381785 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
I1101 11:15:58.017370  578608 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.de381785 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
I1101 11:15:58.017575  578608 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
I1101 11:15:58.017613  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1101 11:15:58.017657  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1101 11:15:58.017721  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1101 11:15:58.017758  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1101 11:15:58.017790  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1101 11:15:58.017833  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1101 11:15:58.017865  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1101 11:15:58.017893  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1101 11:15:58.017992  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
W1101 11:15:58.018067  578608 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
I1101 11:15:58.018106  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
I1101 11:15:58.018156  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
I1101 11:15:58.018245  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
I1101 11:15:58.018300  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
I1101 11:15:58.018402  578608 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
I1101 11:15:58.018462  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
I1101 11:15:58.018505  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1101 11:15:58.018536  578608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
I1101 11:15:58.018637  578608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
I1101 11:15:58.036051  578608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
I1101 11:15:58.138116  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I1101 11:15:58.144950  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I1101 11:15:58.159382  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I1101 11:15:58.165144  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
I1101 11:15:58.176077  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I1101 11:15:58.181281  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I1101 11:15:58.190992  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I1101 11:15:58.195066  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I1101 11:15:58.206432  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I1101 11:15:58.211100  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I1101 11:15:58.223496  578608 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I1101 11:15:58.227587  578608 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I1101 11:15:58.239706  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 11:15:58.261594  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1101 11:15:58.282102  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 11:15:58.300233  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 11:15:58.319100  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I1101 11:15:58.347857  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 11:15:58.368246  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 11:15:58.386198  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 11:15:58.404021  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
I1101 11:15:58.423560  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 11:15:58.447694  578608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
I1101 11:15:58.472645  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I1101 11:15:58.488302  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
I1101 11:15:58.503703  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I1101 11:15:58.517391  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I1101 11:15:58.531377  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I1101 11:15:58.544986  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I1101 11:15:58.557401  578608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I1101 11:15:58.571225  578608 ssh_runner.go:195] Run: openssl version
I1101 11:15:58.578046  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
I1101 11:15:58.587494  578608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
I1101 11:15:58.593105  578608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
I1101 11:15:58.593191  578608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
I1101 11:15:58.635310  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
I1101 11:15:58.644323  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 11:15:58.653119  578608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 11:15:58.657082  578608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
I1101 11:15:58.657225  578608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 11:15:58.699842  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 11:15:58.708321  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
I1101 11:15:58.716729  578608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
I1101 11:15:58.720604  578608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
I1101 11:15:58.720719  578608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
I1101 11:15:58.762170  578608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
I1101 11:15:58.770567  578608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1101 11:15:58.775766  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1101 11:15:58.819344  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1101 11:15:58.860873  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1101 11:15:58.902554  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1101 11:15:58.948263  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1101 11:15:58.991984  578608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1101 11:15:59.038199  578608 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
I1101 11:15:59.038367  578608 kubeadm.go:947] kubelet [Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3

[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1101 11:15:59.038399  578608 kube-vip.go:115] generating kube-vip config ...
I1101 11:15:59.038454  578608 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
I1101 11:15:59.051443  578608 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
stdout:

stderr:
I1101 11:15:59.051569  578608 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.49.254
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v1.0.1
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I1101 11:15:59.051640  578608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1101 11:15:59.060678  578608 binaries.go:44] Found k8s binaries, skipping transfer
I1101 11:15:59.060802  578608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I1101 11:15:59.068602  578608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1101 11:15:59.084599  578608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 11:15:59.099086  578608 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
I1101 11:15:59.115310  578608 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
I1101 11:15:59.119660  578608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 11:15:59.130298  578608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 11:15:59.285095  578608 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 11:15:59.300318  578608 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1101 11:15:59.300683  578608 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:15:59.300729  578608 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1101 11:15:59.303514  578608 out.go:179] * Verifying Kubernetes components...
I1101 11:15:59.305308  578608 out.go:179] * Enabled addons: 
I1101 11:15:59.307390  578608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 11:15:59.307455  578608 addons.go:515] duration metric: took 6.717714ms for enable addons: enabled=[]
I1101 11:15:59.441859  578608 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1101 11:15:59.456724  578608 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W1101 11:15:59.456809  578608 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
I1101 11:15:59.457226  578608 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1101 11:15:59.457247  578608 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1101 11:15:59.457270  578608 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1101 11:15:59.457276  578608 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1101 11:15:59.457288  578608 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1101 11:15:59.457539  578608 node_ready.go:35] waiting up to 6m0s for node "ha-472819-m02" to be "Ready" ...
I1101 11:15:59.457892  578608 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
W1101 11:16:01.461531  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:03.461648  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:05.960880  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:07.965825  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:10.460810  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:12.462022  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:14.963419  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:17.461288  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:19.962801  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:22.460998  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:24.461503  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:26.461549  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:28.960839  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:30.961373  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:32.966968  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:35.461953  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:37.964134  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:40.463126  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:42.962508  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:45.462070  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:47.961551  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:50.462455  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:52.961222  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:54.965866  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:57.461980  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:16:59.962121  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:01.962277  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:04.462246  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:06.961119  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:08.961546  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:11.461389  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:13.961024  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:15.962142  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:17.962524  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:20.461444  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:22.961273  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:24.961928  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:27.461653  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:29.962221  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:32.461591  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:34.960820  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:36.961827  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:39.461406  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:41.961807  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:44.460715  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:46.461377  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:48.961578  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:50.963846  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:53.461370  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:55.960977  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:57.961171  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:17:59.962094  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:02.461729  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:04.962593  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:07.463184  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:09.961653  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:11.963069  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:14.461331  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:16.462170  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:18.961023  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:20.961773  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:23.461510  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:25.961420  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:28.461593  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:30.461974  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:32.961461  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:34.961529  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:36.961645  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:39.461087  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:41.960968  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:43.961199  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:46.461682  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:48.461883  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:50.961442  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:52.961647  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:54.961926  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:57.461293  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:18:59.461774  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:01.961172  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:03.961654  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:05.962064  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:08.461820  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:10.961388  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:12.962132  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:14.962726  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:17.460825  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:19.461042  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:21.462961  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:23.960824  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:25.961509  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:28.461497  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:30.461855  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:32.461984  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:34.962239  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:37.461819  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:39.961778  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:42.461503  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:44.461760  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:46.961743  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:48.961927  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:50.961984  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:52.963933  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:55.460969  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:57.461884  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:19:59.965968  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:01.969049  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:04.461114  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:06.462755  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:08.961378  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:10.962653  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:13.461149  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:15.461981  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:17.462093  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:19.975995  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:22.461878  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:24.961686  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:27.460905  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:29.461459  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:31.962211  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:33.963783  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:36.460918  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:38.460990  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:40.462156  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:42.961857  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:44.962120  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:47.461447  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:49.963034  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:52.461084  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:54.461566  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:56.962030  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:20:59.461959  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:01.961667  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:03.961861  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:06.460921  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:08.461335  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:10.962492  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:13.461754  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:15.961429  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:18.460799  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:20.461135  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:22.461660  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:24.960986  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:26.961142  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:29.461473  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:31.961431  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:34.460746  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:36.461329  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:38.461738  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:40.962944  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:43.461771  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:45.962283  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:48.461779  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:50.961848  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:53.460733  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:55.460884  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
W1101 11:21:57.461124  578608 node_ready.go:57] node "ha-472819-m02" has "Ready":"Unknown" status (will retry)
I1101 11:21:59.457938  578608 node_ready.go:38] duration metric: took 6m0.000371481s for node "ha-472819-m02" to be "Ready" ...
I1101 11:21:59.461046  578608 out.go:203] 
W1101 11:21:59.463878  578608 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
W1101 11:21:59.463905  578608 out.go:285] * 
* 
W1101 11:21:59.471454  578608 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1101 11:21:59.474312  578608 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-arm64 -p ha-472819 node start m02 --alsologtostderr -v 5": exit status 80
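For reference, the six-minute wait that timed out above is simply a poll of the node's Ready condition against the API server. A minimal client-go sketch of an equivalent check is shown below; it is a hypothetical illustration rather than the minikube wait code itself, and it assumes the ha-472819 cluster is the current context in ~/.kube/config.

// node_ready_sketch.go: poll the Ready condition of ha-472819-m02,
// roughly what the node_ready.go wait loop in the log above is doing.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the failed wait
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-472819-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("%s Ready=%s\n", time.Now().Format(time.RFC3339), c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for ha-472819-m02 to become Ready")
}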
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (1.340246878s)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:21:59.594717  580561 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:21:59.594852  580561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:21:59.594863  580561 out.go:374] Setting ErrFile to fd 2...
	I1101 11:21:59.594868  580561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:21:59.595281  580561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:21:59.595616  580561 out.go:368] Setting JSON to false
	I1101 11:21:59.595776  580561 notify.go:221] Checking for updates...
	I1101 11:21:59.596431  580561 mustload.go:66] Loading cluster: ha-472819
	I1101 11:21:59.596894  580561 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:21:59.596915  580561 status.go:174] checking status of ha-472819 ...
	I1101 11:21:59.597535  580561 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:21:59.622151  580561 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:21:59.622177  580561 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:21:59.622590  580561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:21:59.650169  580561 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:21:59.650489  580561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:21:59.650578  580561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:21:59.671859  580561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:21:59.787758  580561 ssh_runner.go:195] Run: systemctl --version
	I1101 11:21:59.795773  580561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:21:59.810625  580561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:21:59.879584  580561 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:21:59.868298828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:21:59.882442  580561 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:21:59.882763  580561 api_server.go:166] Checking apiserver status ...
	I1101 11:21:59.882886  580561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:21:59.895817  580561 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:21:59.904549  580561 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:21:59.904628  580561 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:21:59.914023  580561 api_server.go:204] freezer state: "THAWED"
	I1101 11:21:59.914056  580561 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:21:59.926415  580561 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:21:59.926464  580561 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:21:59.926492  580561 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:21:59.926527  580561 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:21:59.926900  580561 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:21:59.955546  580561 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:21:59.955575  580561 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:21:59.955862  580561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:21:59.976367  580561 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:21:59.976791  580561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:21:59.976865  580561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:21:59.995044  580561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:00.352031  580561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:00.391877  580561 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:00.391913  580561 api_server.go:166] Checking apiserver status ...
	I1101 11:22:00.391973  580561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:00.407376  580561 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:00.407409  580561 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:00.407420  580561 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:00.407438  580561 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:00.407772  580561 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:00.441635  580561 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:00.441679  580561 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:00.442094  580561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:00.464404  580561 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:00.464764  580561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:00.464812  580561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:00.497925  580561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:00.612265  580561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:00.627668  580561 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:00.627700  580561 api_server.go:166] Checking apiserver status ...
	I1101 11:22:00.627743  580561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:00.640360  580561 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:00.649161  580561 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:00.649239  580561 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:00.657618  580561 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:00.657723  580561 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:00.666173  580561 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:00.666207  580561 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:00.666224  580561 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:00.666246  580561 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:00.666571  580561 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:00.685474  580561 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:00.685505  580561 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:00.685841  580561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:00.710228  580561 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:00.710644  580561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:00.710697  580561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:00.730133  580561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:00.835098  580561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:00.850317  580561 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1101 11:22:00.858731  534720 retry.go:31] will retry after 1.256169376s: exit status 2
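The status run above reports "apiserver: Stopped" for m02 because pgrep finds no kube-apiserver process on that node (it exits with status 1), while the kube-vip virtual IP 192.168.49.254:8443 still answers health checks from the surviving control-plane nodes. A small Go sketch of that same /healthz probe is below; it is illustrative only and skips TLS verification instead of loading the cluster CA the way the status check does.

// healthz_probe_sketch.go: probe the HA virtual IP's /healthz endpoint,
// mirroring the "Checking apiserver healthz" lines in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; a real check should
		// trust the cluster CA (.minikube/ca.crt) instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The passing checks in the log print "returned 200: ok".
	fmt.Println(resp.StatusCode, string(body))
}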
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (1.00074215s)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:22:02.177887  580750 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:02.178202  580750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:02.178218  580750 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:02.178224  580750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:02.178535  580750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:02.178799  580750 out.go:368] Setting JSON to false
	I1101 11:22:02.178835  580750 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:02.178861  580750 notify.go:221] Checking for updates...
	I1101 11:22:02.179344  580750 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:02.179388  580750 status.go:174] checking status of ha-472819 ...
	I1101 11:22:02.180593  580750 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:02.200689  580750 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:02.200710  580750 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:02.201077  580750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:02.226848  580750 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:02.227243  580750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:02.227296  580750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:02.253827  580750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:02.359488  580750 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:02.366254  580750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:02.382888  580750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:02.465209  580750 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:02.454903777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:02.465949  580750 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:02.465988  580750 api_server.go:166] Checking apiserver status ...
	I1101 11:22:02.466050  580750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:02.479918  580750 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:02.488770  580750 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:02.488840  580750 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:02.496695  580750 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:02.496725  580750 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:02.507414  580750 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:02.507442  580750 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:02.507453  580750 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:02.507497  580750 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:02.507804  580750 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:02.527491  580750 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:02.527511  580750 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:02.527827  580750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:02.545772  580750 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:02.546110  580750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:02.546155  580750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:02.563850  580750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:02.671169  580750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:02.688459  580750 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:02.688485  580750 api_server.go:166] Checking apiserver status ...
	I1101 11:22:02.688537  580750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:02.699792  580750 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:02.699858  580750 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:02.699896  580750 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:02.699939  580750 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:02.700317  580750 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:02.726972  580750 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:02.726998  580750 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:02.727469  580750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:02.746156  580750 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:02.746467  580750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:02.746513  580750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:02.764594  580750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:02.867767  580750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:02.881307  580750 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:02.881380  580750 api_server.go:166] Checking apiserver status ...
	I1101 11:22:02.881459  580750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:02.893137  580750 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:02.902504  580750 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:02.902605  580750 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:02.910995  580750 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:02.911024  580750 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:02.919736  580750 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:02.919817  580750 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:02.919842  580750 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:02.919883  580750 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:02.920249  580750 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:02.947229  580750 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:02.947250  580750 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:02.947572  580750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:02.968261  580750 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:02.968577  580750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:02.968628  580750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:02.990360  580750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:03.095906  580750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:03.109877  580750 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1101 11:22:03.116226  534720 retry.go:31] will retry after 1.941114907s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (990.240092ms)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:22:05.113259  580930 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:05.113444  580930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:05.113472  580930 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:05.113490  580930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:05.113823  580930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:05.114072  580930 out.go:368] Setting JSON to false
	I1101 11:22:05.114146  580930 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:05.114304  580930 notify.go:221] Checking for updates...
	I1101 11:22:05.114688  580930 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:05.114740  580930 status.go:174] checking status of ha-472819 ...
	I1101 11:22:05.115679  580930 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:05.137206  580930 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:05.137254  580930 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:05.137673  580930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:05.171183  580930 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:05.171489  580930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:05.171538  580930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:05.191807  580930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:05.299995  580930 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:05.307079  580930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:05.320419  580930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:05.389509  580930 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:05.378100936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:05.390279  580930 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:05.390318  580930 api_server.go:166] Checking apiserver status ...
	I1101 11:22:05.390362  580930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:05.403040  580930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:05.411874  580930 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:05.411952  580930 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:05.425076  580930 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:05.425108  580930 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:05.433833  580930 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:05.433864  580930 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:05.433876  580930 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:05.433894  580930 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:05.434220  580930 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:05.452341  580930 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:05.452377  580930 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:05.452680  580930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:05.472260  580930 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:05.472575  580930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:05.472621  580930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:05.492517  580930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:05.600529  580930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:05.616654  580930 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:05.616686  580930 api_server.go:166] Checking apiserver status ...
	I1101 11:22:05.616734  580930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:05.627420  580930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:05.627446  580930 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:05.627456  580930 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:05.627473  580930 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:05.627782  580930 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:05.645948  580930 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:05.645987  580930 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:05.646304  580930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:05.664279  580930 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:05.664585  580930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:05.664631  580930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:05.684404  580930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:05.792152  580930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:05.807125  580930 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:05.807151  580930 api_server.go:166] Checking apiserver status ...
	I1101 11:22:05.807195  580930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:05.819258  580930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:05.829599  580930 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:05.829674  580930 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:05.837812  580930 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:05.837853  580930 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:05.846328  580930 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:05.846359  580930 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:05.846369  580930 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:05.846387  580930 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:05.846704  580930 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:05.874465  580930 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:05.874495  580930 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:05.874823  580930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:05.896005  580930 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:05.896410  580930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:05.896472  580930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:05.916698  580930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:06.025247  580930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:06.039493  580930 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1101 11:22:06.048710  534720 retry.go:31] will retry after 1.774699772s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (990.7979ms)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:22:07.884651  581112 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:07.885220  581112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:07.885231  581112 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:07.885236  581112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:07.885515  581112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:07.885766  581112 out.go:368] Setting JSON to false
	I1101 11:22:07.885791  581112 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:07.886194  581112 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:07.886213  581112 status.go:174] checking status of ha-472819 ...
	I1101 11:22:07.886710  581112 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:07.886993  581112 notify.go:221] Checking for updates...
	I1101 11:22:07.908502  581112 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:07.908530  581112 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:07.908945  581112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:07.948349  581112 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:07.948654  581112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:07.948707  581112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:07.971801  581112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:08.082403  581112 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:08.089471  581112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:08.104250  581112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:08.171087  581112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:08.15984743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:08.172213  581112 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:08.172250  581112 api_server.go:166] Checking apiserver status ...
	I1101 11:22:08.172297  581112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:08.184602  581112 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:08.193333  581112 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:08.193399  581112 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:08.203061  581112 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:08.203085  581112 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:08.211583  581112 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:08.211612  581112 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:08.211624  581112 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:08.211681  581112 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:08.212072  581112 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:08.229756  581112 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:08.229785  581112 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:08.230165  581112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:08.247132  581112 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:08.247452  581112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:08.247498  581112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:08.266241  581112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:08.378985  581112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:08.393085  581112 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:08.393117  581112 api_server.go:166] Checking apiserver status ...
	I1101 11:22:08.393157  581112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:08.403465  581112 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:08.403486  581112 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:08.403495  581112 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:08.403510  581112 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:08.403832  581112 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:08.428293  581112 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:08.428327  581112 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:08.428652  581112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:08.446785  581112 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:08.447107  581112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:08.447148  581112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:08.467149  581112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:08.571403  581112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:08.585647  581112 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:08.585679  581112 api_server.go:166] Checking apiserver status ...
	I1101 11:22:08.585765  581112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:08.599073  581112 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:08.609570  581112 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:08.609649  581112 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:08.617753  581112 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:08.617829  581112 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:08.626732  581112 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:08.626761  581112 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:08.626771  581112 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:08.626789  581112 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:08.627115  581112 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:08.645777  581112 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:08.645819  581112 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:08.646298  581112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:08.664938  581112 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:08.665381  581112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:08.665433  581112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:08.685011  581112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:08.791320  581112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:08.808561  581112 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1101 11:22:08.815110  534720 retry.go:31] will retry after 4.787737497s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (1.043564687s)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:22:13.661271  581298 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:13.661547  581298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:13.661559  581298 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:13.661565  581298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:13.661929  581298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:13.662209  581298 out.go:368] Setting JSON to false
	I1101 11:22:13.662280  581298 notify.go:221] Checking for updates...
	I1101 11:22:13.662248  581298 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:13.663468  581298 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:13.663515  581298 status.go:174] checking status of ha-472819 ...
	I1101 11:22:13.664393  581298 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:13.696511  581298 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:13.696539  581298 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:13.696855  581298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:13.724420  581298 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:13.724909  581298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:13.724954  581298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:13.746274  581298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:13.855863  581298 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:13.864649  581298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:13.884450  581298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:13.969556  581298 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:13.95706142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:13.970704  581298 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:13.970739  581298 api_server.go:166] Checking apiserver status ...
	I1101 11:22:13.970810  581298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:13.986945  581298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:14.002235  581298 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:14.002319  581298 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:14.019209  581298 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:14.019249  581298 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:14.027753  581298 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:14.027803  581298 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:14.027816  581298 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:14.027870  581298 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:14.028216  581298 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:14.047880  581298 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:14.047904  581298 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:14.048282  581298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:14.069377  581298 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:14.069819  581298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:14.069866  581298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:14.095092  581298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:14.205919  581298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:14.220778  581298 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:14.220810  581298 api_server.go:166] Checking apiserver status ...
	I1101 11:22:14.220856  581298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:14.231108  581298 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:14.231132  581298 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:14.231143  581298 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:14.231159  581298 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:14.231495  581298 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:14.249759  581298 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:14.249790  581298 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:14.250105  581298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:14.269321  581298 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:14.269635  581298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:14.269790  581298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:14.288668  581298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:14.391345  581298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:14.406749  581298 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:14.406779  581298 api_server.go:166] Checking apiserver status ...
	I1101 11:22:14.406820  581298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:14.426006  581298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:14.435375  581298 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:14.435453  581298 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:14.443608  581298 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:14.443635  581298 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:14.452089  581298 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:14.452116  581298 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:14.452125  581298 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:14.452169  581298 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:14.452496  581298 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:14.471939  581298 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:14.471966  581298 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:14.472263  581298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:14.490063  581298 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:14.490380  581298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:14.490445  581298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:14.518298  581298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:14.627719  581298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:14.642497  581298 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1101 11:22:14.649671  534720 retry.go:31] will retry after 7.529398965s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (974.276716ms)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:22:22.226954  581491 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:22.227121  581491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:22.227132  581491 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:22.227137  581491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:22.227371  581491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:22.227621  581491 out.go:368] Setting JSON to false
	I1101 11:22:22.227654  581491 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:22.227766  581491 notify.go:221] Checking for updates...
	I1101 11:22:22.228081  581491 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:22.228102  581491 status.go:174] checking status of ha-472819 ...
	I1101 11:22:22.228970  581491 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:22.249318  581491 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:22.249344  581491 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:22.249632  581491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:22.274825  581491 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:22.275126  581491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:22.275177  581491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:22.296845  581491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:22.404385  581491 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:22.412591  581491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:22.431607  581491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:22.506121  581491 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:22.496367339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:22.506672  581491 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:22.506704  581491 api_server.go:166] Checking apiserver status ...
	I1101 11:22:22.506758  581491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:22.521170  581491 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:22.530057  581491 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:22.530141  581491 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:22.538430  581491 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:22.538460  581491 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:22.547062  581491 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:22.547092  581491 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:22.547121  581491 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:22.547146  581491 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:22.547483  581491 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:22.571113  581491 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:22.571139  581491 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:22.571436  581491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:22.589395  581491 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:22.589846  581491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:22.589900  581491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:22.608439  581491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:22.711206  581491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:22.726109  581491 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:22.726135  581491 api_server.go:166] Checking apiserver status ...
	I1101 11:22:22.726178  581491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:22.737345  581491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:22.737366  581491 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:22.737375  581491 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:22.737391  581491 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:22.737680  581491 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:22.755304  581491 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:22.755337  581491 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:22.755627  581491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:22.775086  581491 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:22.775418  581491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:22.775470  581491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:22.794195  581491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:22.895259  581491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:22.909083  581491 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:22.909115  581491 api_server.go:166] Checking apiserver status ...
	I1101 11:22:22.909158  581491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:22.921086  581491 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:22.937561  581491 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:22.937671  581491 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:22.950650  581491 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:22.950685  581491 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:22.959208  581491 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:22.959237  581491 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:22.959248  581491 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:22.959264  581491 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:22.959566  581491 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:22.980711  581491 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:22.980738  581491 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:22.981018  581491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:23.000725  581491 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:23.001055  581491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:23.001107  581491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:23.020126  581491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:23.127490  581491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:23.149574  581491 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
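The per-node apiserver field in this output comes from the three-step probe visible in the stderr above: pgrep for a kube-apiserver process, a freezer-cgroup check on the PID it returns, and an HTTPS GET against https://192.168.49.254:8443/healthz. On ha-472819-m02 the pgrep step finds no process ("unable to get apiserver pid"), which is why that node is reported with apiserver: Stopped even though the load-balanced healthz probe still answers 200 from the other control planes. A minimal Go sketch of the same probe, under the assumption that it runs directly on the node (the test drives the identical commands through its ssh_runner) and that certificate verification is skipped for the cluster-internal endpoint:

	// apiserver_probe.go - standalone sketch of the status probe recorded above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: locate the apiserver PID (same pgrep invocation as in the log).
		pid, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver pid not found:", err) // the failure seen on ha-472819-m02
			return
		}
		// Step 2: read the cgroup entries for that PID; the log uses the freezer
		// line to confirm the container is THAWED (i.e. not paused).
		cgroup, _ := exec.Command("sudo", "cat",
			fmt.Sprintf("/proc/%s/cgroup", strings.TrimSpace(string(pid)))).Output()
		fmt.Print(string(cgroup))
		// Step 3: probe the load-balanced healthz endpoint; verification is
		// skipped because the apiserver serves a cluster-internal certificate.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // "200 OK" in the runs above
	}
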
I1101 11:22:23.156570  534720 retry.go:31] will retry after 4.433715033s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (1.002210223s)

                                                
                                                
-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:22:27.637872  581673 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:27.638107  581673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:27.638136  581673 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:27.638155  581673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:27.638454  581673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:27.638679  581673 out.go:368] Setting JSON to false
	I1101 11:22:27.638734  581673 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:27.638759  581673 notify.go:221] Checking for updates...
	I1101 11:22:27.639187  581673 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:27.639222  581673 status.go:174] checking status of ha-472819 ...
	I1101 11:22:27.639855  581673 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:27.666305  581673 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:27.666327  581673 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:27.666637  581673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:27.693131  581673 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:27.693459  581673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:27.693509  581673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:27.719097  581673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:27.839227  581673 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:27.846454  581673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:27.859534  581673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:27.955865  581673 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:27.946291065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:27.956411  581673 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:27.956454  581673 api_server.go:166] Checking apiserver status ...
	I1101 11:22:27.956518  581673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:27.969829  581673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:27.979263  581673 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:27.979351  581673 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:27.987968  581673 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:27.987997  581673 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:27.996311  581673 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:27.996337  581673 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:27.996348  581673 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:27.996364  581673 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:27.996655  581673 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:28.017510  581673 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:28.017538  581673 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:28.017941  581673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:28.038771  581673 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:28.039098  581673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:28.039153  581673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:28.058253  581673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:28.163941  581673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:28.179479  581673 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:28.179510  581673 api_server.go:166] Checking apiserver status ...
	I1101 11:22:28.179552  581673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:28.190152  581673 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:28.190175  581673 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:28.190184  581673 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:28.190240  581673 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:28.190586  581673 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:28.207962  581673 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:28.208003  581673 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:28.208331  581673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:28.225963  581673 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:28.226291  581673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:28.226336  581673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:28.243762  581673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:28.347724  581673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:28.363233  581673 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:28.363271  581673 api_server.go:166] Checking apiserver status ...
	I1101 11:22:28.363314  581673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:28.377078  581673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:28.386720  581673 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:28.386802  581673 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:28.395303  581673 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:28.395344  581673 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:28.403960  581673 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:28.403989  581673 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:28.404000  581673 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:28.404019  581673 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:28.404375  581673 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:28.423738  581673 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:28.423767  581673 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:28.424091  581673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:28.443686  581673 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:28.443993  581673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:28.444036  581673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:28.465624  581673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:28.571587  581673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:28.584897  581673 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1101 11:22:28.593615  534720 retry.go:31] will retry after 12.7130759s: exit status 2
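Each failed status call is retried by the harness after a longer delay (4.43s above, now 12.71s). The pattern, sketched as a small self-contained Go program; the delays are hard-coded here purely for illustration and are not taken from minikube's own retry.go backoff logic:

	// retry_status.go - illustrative retry loop around the failing status call.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Delays mirror the two retry intervals recorded above (assumption:
		// the real harness computes its own backoff inside retry.go).
		delays := []time.Duration{4433 * time.Millisecond, 12713 * time.Millisecond}
		args := []string{"-p", "ha-472819", "status", "--alsologtostderr", "-v", "5"}
		for attempt := 0; ; attempt++ {
			err := exec.Command("out/minikube-linux-arm64", args...).Run()
			if err == nil {
				fmt.Println("status OK")
				return
			}
			if attempt >= len(delays) {
				fmt.Println("giving up:", err) // the test then fails with exit status 2
				return
			}
			fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
			time.Sleep(delays[attempt])
		}
	}
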
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 2 (1.053842277s)

                                                
                                                
-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:22:41.365303  581862 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:22:41.365508  581862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:41.365520  581862 out.go:374] Setting ErrFile to fd 2...
	I1101 11:22:41.365525  581862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:22:41.365823  581862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:22:41.366061  581862 out.go:368] Setting JSON to false
	I1101 11:22:41.366096  581862 mustload.go:66] Loading cluster: ha-472819
	I1101 11:22:41.366138  581862 notify.go:221] Checking for updates...
	I1101 11:22:41.366492  581862 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:22:41.366510  581862 status.go:174] checking status of ha-472819 ...
	I1101 11:22:41.367350  581862 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:22:41.391932  581862 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:22:41.391954  581862 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:41.392409  581862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:22:41.421958  581862 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:22:41.422348  581862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:41.422396  581862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:22:41.442910  581862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:22:41.552018  581862 ssh_runner.go:195] Run: systemctl --version
	I1101 11:22:41.558882  581862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:41.572953  581862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:22:41.646130  581862 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-11-01 11:22:41.632240346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:22:41.646743  581862 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:41.646776  581862 api_server.go:166] Checking apiserver status ...
	I1101 11:22:41.646818  581862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:41.659313  581862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:22:41.668109  581862 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:22:41.668184  581862 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:22:41.675847  581862 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:41.675876  581862 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:41.685111  581862 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:41.685140  581862 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:22:41.685153  581862 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:41.685170  581862 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:22:41.685512  581862 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:22:41.706335  581862 status.go:371] ha-472819-m02 host status = "Running" (err=<nil>)
	I1101 11:22:41.706361  581862 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:41.706667  581862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:22:41.723707  581862 host.go:66] Checking if "ha-472819-m02" exists ...
	I1101 11:22:41.724073  581862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:41.724152  581862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:22:41.741879  581862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:22:41.852504  581862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:41.870682  581862 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:41.870708  581862 api_server.go:166] Checking apiserver status ...
	I1101 11:22:41.870754  581862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:22:41.884768  581862 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:22:41.884793  581862 status.go:463] ha-472819-m02 apiserver status = Running (err=<nil>)
	I1101 11:22:41.884804  581862 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:41.884819  581862 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:22:41.885129  581862 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:22:41.903389  581862 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:22:41.903423  581862 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:41.903736  581862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:22:41.931958  581862 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:22:41.932321  581862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:41.932370  581862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:22:41.956029  581862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:22:42.076649  581862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:42.097424  581862 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:22:42.097454  581862 api_server.go:166] Checking apiserver status ...
	I1101 11:22:42.097504  581862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:22:42.113987  581862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:22:42.128743  581862 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:22:42.128848  581862 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:22:42.147402  581862 api_server.go:204] freezer state: "THAWED"
	I1101 11:22:42.147434  581862 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:22:42.158904  581862 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:22:42.158939  581862 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:22:42.158951  581862 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:22:42.158970  581862 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:22:42.159332  581862 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:22:42.186692  581862 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:22:42.186719  581862 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:42.187076  581862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:22:42.210727  581862 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:22:42.211173  581862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:22:42.211238  581862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:22:42.233360  581862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:22:42.344054  581862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:22:42.357506  581862 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472819
helpers_test.go:243: (dbg) docker inspect ha-472819:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c",
	        "Created": "2025-11-01T11:09:20.899997169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:09:20.960423395Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/hosts",
	        "LogPath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c-json.log",
	        "Name": "/ha-472819",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472819:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-472819",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c",
	                "LowerDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472819",
	                "Source": "/var/lib/docker/volumes/ha-472819/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472819",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472819",
	                "name.minikube.sigs.k8s.io": "ha-472819",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d06c75db1f88cbad3b99e1d3febd830132bbd4294bd314a091e234e9ed41115",
	            "SandboxKey": "/var/run/docker/netns/2d06c75db1f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472819": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:12:7f:3f:18:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fad877b9a6cbf2fecd3371f8a88631aadb56e394476f97473ad152037f12fe08",
	                    "EndpointID": "3d75015284989adc37a7194f7d4e42693d55ecd110cde90b4ea89049faa60f3e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472819",
	                        "66de5fe90fef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
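The inspect output ties the earlier SSH connections back to the container: 22/tcp is published on 127.0.0.1:33510, the same Port:33510 the sshutil client used above. A small standalone sketch (not part of the test suite) that extracts the same mapping with encoding/json instead of the Go template the log runs through docker container inspect -f:

	// port_lookup.go - read the published SSH port of ha-472819 from docker inspect.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-472819").Output()
		if err != nil {
			panic(err)
		}
		// docker inspect prints a JSON array of container objects.
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		if len(containers) == 0 {
			fmt.Println("no such container")
			return
		}
		for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
			// 127.0.0.1:33510 according to the inspect output above.
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
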
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-472819 -n ha-472819
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 logs -n 25: (1.483304093s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ cp      │ ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m03_ha-472819.txt                       │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819 sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819.txt                                                 │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ cp      │ ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819-m03_ha-472819-m02.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m02 sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819-m02.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ cp      │ ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819-m04:/home/docker/cp-test_ha-472819-m03_ha-472819-m04.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819-m04.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp testdata/cp-test.txt ha-472819-m04:/home/docker/cp-test.txt                                                             │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3224874569/001/cp-test_ha-472819-m04.txt │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m04_ha-472819.txt                       │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819 sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819.txt                                                 │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819-m04_ha-472819-m02.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m02 sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819-m02.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819-m03:/home/docker/cp-test_ha-472819-m04_ha-472819-m03.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819-m03.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ node    │ ha-472819 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ node    │ ha-472819 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:09:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:09:15.424948  564163 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:09:15.425098  564163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:09:15.425112  564163 out.go:374] Setting ErrFile to fd 2...
	I1101 11:09:15.425118  564163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:09:15.425408  564163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:09:15.425949  564163 out.go:368] Setting JSON to false
	I1101 11:09:15.426851  564163 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10305,"bootTime":1761985051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:09:15.426921  564163 start.go:143] virtualization:  
	I1101 11:09:15.433995  564163 out.go:179] * [ha-472819] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:09:15.437863  564163 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:09:15.437980  564163 notify.go:221] Checking for updates...
	I1101 11:09:15.444935  564163 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:09:15.448333  564163 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:09:15.451645  564163 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:09:15.454845  564163 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:09:15.458142  564163 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:09:15.461449  564163 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:09:15.483754  564163 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:09:15.483880  564163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:09:15.548573  564163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 11:09:15.539247568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:09:15.548693  564163 docker.go:319] overlay module found
	I1101 11:09:15.552124  564163 out.go:179] * Using the docker driver based on user configuration
	I1101 11:09:15.555278  564163 start.go:309] selected driver: docker
	I1101 11:09:15.555310  564163 start.go:930] validating driver "docker" against <nil>
	I1101 11:09:15.555327  564163 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:09:15.556133  564163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:09:15.623357  564163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 11:09:15.613378806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:09:15.623518  564163 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:09:15.623751  564163 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:09:15.626927  564163 out.go:179] * Using Docker driver with root privileges
	I1101 11:09:15.629780  564163 cni.go:84] Creating CNI manager for ""
	I1101 11:09:15.629849  564163 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1101 11:09:15.629863  564163 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 11:09:15.629952  564163 start.go:353] cluster config:
	{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1101 11:09:15.634889  564163 out.go:179] * Starting "ha-472819" primary control-plane node in "ha-472819" cluster
	I1101 11:09:15.637899  564163 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:09:15.640856  564163 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:09:15.643596  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:15.643660  564163 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 11:09:15.643675  564163 cache.go:59] Caching tarball of preloaded images
	I1101 11:09:15.643690  564163 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:09:15.643766  564163 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:09:15.643778  564163 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:09:15.644121  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:15.644152  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json: {Name:mk1ba5f23dfb700a1a8e1eba67301a5ea1e7302e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:15.663055  564163 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:09:15.663081  564163 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:09:15.663095  564163 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:09:15.663121  564163 start.go:360] acquireMachinesLock for ha-472819: {Name:mke8efbc22a0e700799c27ca313f26b1261a26ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:09:15.663233  564163 start.go:364] duration metric: took 92.735µs to acquireMachinesLock for "ha-472819"
	I1101 11:09:15.663263  564163 start.go:93] Provisioning new machine with config: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:09:15.663334  564163 start.go:125] createHost starting for "" (driver="docker")
	I1101 11:09:15.666819  564163 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:09:15.667057  564163 start.go:159] libmachine.API.Create for "ha-472819" (driver="docker")
	I1101 11:09:15.667098  564163 client.go:173] LocalClient.Create starting
	I1101 11:09:15.667168  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:09:15.667207  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:15.667225  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:15.667289  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:09:15.667320  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:15.667334  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:15.667716  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 11:09:15.683755  564163 cli_runner.go:211] docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 11:09:15.683839  564163 network_create.go:284] running [docker network inspect ha-472819] to gather additional debugging logs...
	I1101 11:09:15.683861  564163 cli_runner.go:164] Run: docker network inspect ha-472819
	W1101 11:09:15.699243  564163 cli_runner.go:211] docker network inspect ha-472819 returned with exit code 1
	I1101 11:09:15.699272  564163 network_create.go:287] error running [docker network inspect ha-472819]: docker network inspect ha-472819: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472819 not found
	I1101 11:09:15.699286  564163 network_create.go:289] output of [docker network inspect ha-472819]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472819 not found
	
	** /stderr **
	I1101 11:09:15.699396  564163 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:09:15.715873  564163 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191bd20}
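
	(Editorial note, not part of the log: the inspect calls above use a Go template that renders each Docker network as a small JSON-ish object with Name, Driver, Subnet, Gateway, MTU and ContainerIPs, from which minikube picks a free private subnet. A minimal, hypothetical Go sketch of decoding a payload of that shape follows; the struct fields and the sample value are assumptions for illustration, not minikube code.)

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // netInfo mirrors the fields the --format template above emits
    // (field names here are assumed for illustration only).
    type netInfo struct {
        Name         string   `json:"Name"`
        Driver       string   `json:"Driver"`
        Subnet       string   `json:"Subnet"`
        Gateway      string   `json:"Gateway"`
        MTU          int      `json:"MTU"`
        ContainerIPs []string `json:"ContainerIPs"`
    }

    func main() {
        // Sample payload in the shape the template renders for the default
        // bridge network. The template leaves a trailing comma inside the
        // ContainerIPs array when containers are attached, so normalise it
        // before decoding.
        raw := `{"Name": "bridge","Driver": "bridge","Subnet": "172.17.0.0/16","Gateway": "172.17.0.1","MTU": 1500, "ContainerIPs": []}`
        raw = strings.ReplaceAll(raw, ",]", "]")

        var ni netInfo
        if err := json.Unmarshal([]byte(raw), &ni); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%s uses %s via %s (MTU %d)\n", ni.Name, ni.Subnet, ni.Gateway, ni.MTU)
    }
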
	I1101 11:09:15.715912  564163 network_create.go:124] attempt to create docker network ha-472819 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 11:09:15.715972  564163 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472819 ha-472819
	I1101 11:09:15.771899  564163 network_create.go:108] docker network ha-472819 192.168.49.0/24 created
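
	(Editorial note: the network-create step above can be reproduced in isolation. A minimal sketch along the lines of the `docker network create` invocation in the log, assuming the Docker CLI and daemon are available; the network name and subnet below are placeholders, not the test's ha-472819 network.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Creates a labelled bridge network with a fixed subnet and MTU,
        // analogous to the command logged above but under a throwaway name.
        name := "demo-net" // illustrative name
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24",
            "--gateway=192.168.58.1",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.example=true",
            name)
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("network create failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("created network %s (id %s)", name, out)
    }
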
	I1101 11:09:15.771935  564163 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472819" container
	I1101 11:09:15.772049  564163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:09:15.788100  564163 cli_runner.go:164] Run: docker volume create ha-472819 --label name.minikube.sigs.k8s.io=ha-472819 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:09:15.806575  564163 oci.go:103] Successfully created a docker volume ha-472819
	I1101 11:09:15.806667  564163 cli_runner.go:164] Run: docker run --rm --name ha-472819-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819 --entrypoint /usr/bin/test -v ha-472819:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:09:16.355704  564163 oci.go:107] Successfully prepared a docker volume ha-472819
	I1101 11:09:16.355755  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:16.355776  564163 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:09:16.355856  564163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:09:20.827013  564163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471121984s)
	I1101 11:09:20.827045  564163 kic.go:203] duration metric: took 4.471266092s to extract preloaded images to volume ...
	W1101 11:09:20.827184  564163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:09:20.827293  564163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:09:20.885378  564163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472819 --name ha-472819 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472819 --network ha-472819 --ip 192.168.49.2 --volume ha-472819:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:09:21.160814  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Running}}
	I1101 11:09:21.184777  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:21.209851  564163 cli_runner.go:164] Run: docker exec ha-472819 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:09:21.261057  564163 oci.go:144] the created container "ha-472819" has a running status.
	I1101 11:09:21.261091  564163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa...
	I1101 11:09:21.772918  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 11:09:21.772974  564163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:09:21.803408  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:21.833811  564163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:09:21.833836  564163 kic_runner.go:114] Args: [docker exec --privileged ha-472819 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:09:21.893208  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:21.920802  564163 machine.go:94] provisionDockerMachine start ...
	I1101 11:09:21.920914  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:21.951309  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:21.951650  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:21.951667  564163 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:09:22.133550  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819
	
	I1101 11:09:22.133576  564163 ubuntu.go:182] provisioning hostname "ha-472819"
	I1101 11:09:22.133648  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:22.153259  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:22.153580  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:22.153595  564163 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819 && echo "ha-472819" | sudo tee /etc/hostname
	I1101 11:09:22.321278  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819
	
	I1101 11:09:22.321359  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:22.340531  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:22.340844  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:22.340864  564163 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:09:22.493831  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:09:22.493861  564163 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:09:22.493890  564163 ubuntu.go:190] setting up certificates
	I1101 11:09:22.493900  564163 provision.go:84] configureAuth start
	I1101 11:09:22.493964  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:09:22.511064  564163 provision.go:143] copyHostCerts
	I1101 11:09:22.511110  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:22.511144  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:09:22.511156  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:22.511234  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:09:22.511330  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:22.511353  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:09:22.511363  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:22.511399  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:09:22.511453  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:22.511474  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:09:22.511481  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:22.511507  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:09:22.511573  564163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819 san=[127.0.0.1 192.168.49.2 ha-472819 localhost minikube]
	I1101 11:09:23.107819  564163 provision.go:177] copyRemoteCerts
	I1101 11:09:23.107890  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:09:23.107933  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.125061  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.229470  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:09:23.229557  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:09:23.247096  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:09:23.247159  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 11:09:23.264591  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:09:23.264659  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 11:09:23.282625  564163 provision.go:87] duration metric: took 788.694673ms to configureAuth
	I1101 11:09:23.282653  564163 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:09:23.282872  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:23.282984  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.300232  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:23.300543  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:23.300570  564163 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:09:23.560046  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:09:23.560073  564163 machine.go:97] duration metric: took 1.639247203s to provisionDockerMachine
	I1101 11:09:23.560084  564163 client.go:176] duration metric: took 7.892975355s to LocalClient.Create
	I1101 11:09:23.560098  564163 start.go:167] duration metric: took 7.893042884s to libmachine.API.Create "ha-472819"
	I1101 11:09:23.560105  564163 start.go:293] postStartSetup for "ha-472819" (driver="docker")
	I1101 11:09:23.560115  564163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:09:23.560191  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:09:23.560242  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.577371  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.681857  564163 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:09:23.685148  564163 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:09:23.685219  564163 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:09:23.685238  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:09:23.685303  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:09:23.685388  564163 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:09:23.685400  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:09:23.685527  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:09:23.692811  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:09:23.710406  564163 start.go:296] duration metric: took 150.285888ms for postStartSetup
	I1101 11:09:23.710773  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:09:23.727204  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:23.727491  564163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:09:23.727555  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.744722  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.847065  564163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:09:23.851940  564163 start.go:128] duration metric: took 8.188589867s to createHost
	I1101 11:09:23.851964  564163 start.go:83] releasing machines lock for "ha-472819", held for 8.188717483s
	I1101 11:09:23.852042  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:09:23.868795  564163 ssh_runner.go:195] Run: cat /version.json
	I1101 11:09:23.868846  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.869109  564163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:09:23.869169  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.887234  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.888188  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:24.071485  564163 ssh_runner.go:195] Run: systemctl --version
	I1101 11:09:24.078091  564163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:09:24.116837  564163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:09:24.121129  564163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:09:24.121205  564163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:09:24.150584  564163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:09:24.150648  564163 start.go:496] detecting cgroup driver to use...
	I1101 11:09:24.150698  564163 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:09:24.150758  564163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:09:24.168976  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:09:24.181753  564163 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:09:24.181845  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:09:24.198141  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:09:24.216855  564163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:09:24.341682  564163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:09:24.476025  564163 docker.go:234] disabling docker service ...
	I1101 11:09:24.476133  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:09:24.497860  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:09:24.511210  564163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:09:24.628871  564163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:09:24.751818  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:09:24.765436  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:09:24.779636  564163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:09:24.779754  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.788816  564163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:09:24.788939  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.797807  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.806546  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.815271  564163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:09:24.823860  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.832662  564163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.846440  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.855087  564163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:09:24.862971  564163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:09:24.870471  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:09:24.977089  564163 ssh_runner.go:195] Run: sudo systemctl restart crio
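
	(Editorial note: the sequence above rewrites the cri-o drop-in with a series of sed commands, pinning the pause image and switching the cgroup manager to cgroupfs, before reloading systemd and restarting crio. A rough Go equivalent of the two central edits follows; the sample config content is an assumption for illustration and is modified in memory rather than on the node.)

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Stand-in for /etc/crio/crio.conf.d/02-crio.conf (contents assumed).
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Equivalent of the two sed edits in the log: pin the pause image
        // and force the cgroupfs cgroup manager.
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
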
	I1101 11:09:25.118783  564163 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:09:25.118897  564163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:09:25.122853  564163 start.go:564] Will wait 60s for crictl version
	I1101 11:09:25.122960  564163 ssh_runner.go:195] Run: which crictl
	I1101 11:09:25.126642  564163 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:09:25.155501  564163 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:09:25.155650  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:09:25.191623  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:09:25.225393  564163 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:09:25.228286  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:09:25.249910  564163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:09:25.253806  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:09:25.264172  564163 kubeadm.go:884] updating cluster {Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:09:25.264297  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:25.264354  564163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:09:25.296986  564163 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:09:25.297011  564163 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:09:25.297070  564163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:09:25.321789  564163 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:09:25.321813  564163 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:09:25.321821  564163 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 11:09:25.321912  564163 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:09:25.321999  564163 ssh_runner.go:195] Run: crio config
	I1101 11:09:25.380528  564163 cni.go:84] Creating CNI manager for ""
	I1101 11:09:25.380597  564163 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1101 11:09:25.380638  564163 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:09:25.380700  564163 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472819 NodeName:ha-472819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:09:25.380877  564163 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-472819"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
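
	(Editorial note: the rendered kubeadm config above is a multi-document YAML file carrying InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small, hypothetical sketch of reading such a file document by document follows; it uses gopkg.in/yaml.v3, and the embedded manifest is a trimmed stand-in, not the exact file scp'd to the node.)

    package main

    import (
        "fmt"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Trimmed stand-in for the generated kubeadm.yaml above.
        manifest := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    `
        dec := yaml.NewDecoder(strings.NewReader(manifest))
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all documents have been read
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
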
	
	I1101 11:09:25.380929  564163 kube-vip.go:115] generating kube-vip config ...
	I1101 11:09:25.381014  564163 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:09:25.393108  564163 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:09:25.393218  564163 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1101 11:09:25.393286  564163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:09:25.401262  564163 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:09:25.401379  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 11:09:25.409393  564163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 11:09:25.422954  564163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:09:25.436690  564163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 11:09:25.449600  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1101 11:09:25.462548  564163 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:09:25.466270  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
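
	(Editorial note: both host-file updates in this run, host.minikube.internal earlier and control-plane.minikube.internal here, follow the same pattern: drop any existing line for the name, then append a fresh "ip<TAB>hostname" entry. A minimal sketch of that logic, operating on an in-memory sample rather than the node's /etc/hosts:)

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any line ending in "<TAB>name" and appends a fresh
    // "ip<TAB>name" entry, mirroring the shell one-liner in the log above.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        sample := "127.0.0.1\tlocalhost\n192.168.49.254\tcontrol-plane.minikube.internal\n"
        fmt.Print(upsertHost(sample, "192.168.49.254", "control-plane.minikube.internal"))
    }
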
	I1101 11:09:25.475862  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:09:25.601826  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:09:25.617309  564163 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.2
	I1101 11:09:25.617340  564163 certs.go:195] generating shared ca certs ...
	I1101 11:09:25.617373  564163 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:25.617566  564163 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:09:25.617633  564163 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:09:25.617663  564163 certs.go:257] generating profile certs ...
	I1101 11:09:25.617789  564163 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:09:25.617814  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt with IP's: []
	I1101 11:09:25.970419  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt ...
	I1101 11:09:25.970452  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt: {Name:mk2f41d01137bc613681198561d475471e9b313e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:25.970692  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key ...
	I1101 11:09:25.970711  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key: {Name:mkb2f404cf11e9ff6d4974de312113eaa2c2831e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:25.970817  564163 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4
	I1101 11:09:25.970839  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1101 11:09:26.521666  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4 ...
	I1101 11:09:26.521708  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4: {Name:mkb42da9edce8a3a5d96bc6e579423c0b2c406c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.521948  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4 ...
	I1101 11:09:26.521970  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4: {Name:mkdb988833aef0d64a2a617d4983ef55d86bf204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.522099  564163 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:09:26.522188  564163 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
	I1101 11:09:26.522253  564163 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
	I1101 11:09:26.522271  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt with IP's: []
	I1101 11:09:26.790462  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt ...
	I1101 11:09:26.790530  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt: {Name:mk06b0a4635c2902eec5ac65c88e17411a71c735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.790721  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key ...
	I1101 11:09:26.790734  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key: {Name:mkba7a7f05214b82bdfe102379d16ce7f31a4fa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
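
	(Editorial note: the profile certificates above are issued off the shared minikubeCA with IP SANs such as 10.96.0.1, 127.0.0.1, 192.168.49.2 and the HA VIP 192.168.49.254. A minimal, self-contained Go sketch of that pattern, using a throwaway CA rather than the test's CA material; names and lifetimes below are illustrative assumptions.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway self-signed CA standing in for minikubeCA.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "exampleCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Leaf certificate with IP SANs, in the spirit of the apiserver
        // profile cert generated in the log above.
        leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "example-apiserver"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
            },
        }
        leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
        fmt.Fprintln(os.Stderr, "issued leaf cert with", len(leafTmpl.IPAddresses), "IP SANs")
    }
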
	I1101 11:09:26.790827  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:09:26.790855  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:09:26.790874  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:09:26.790886  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:09:26.790901  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:09:26.790913  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:09:26.790928  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:09:26.790938  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:09:26.790995  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:09:26.791032  564163 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:09:26.791044  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:09:26.791066  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:09:26.791099  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:09:26.791128  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:09:26.791177  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:09:26.791207  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:26.791225  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:09:26.791237  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:09:26.791808  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:09:26.811389  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:09:26.830648  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:09:26.848734  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:09:26.866857  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 11:09:26.884898  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:09:26.902647  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:09:26.923006  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:09:26.940928  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:09:26.958978  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:09:26.977277  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:09:26.995250  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:09:27.009661  564163 ssh_runner.go:195] Run: openssl version
	I1101 11:09:27.016856  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:09:27.026824  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:27.030812  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:27.030886  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:27.071933  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:09:27.080875  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:09:27.089342  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:09:27.093244  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:09:27.093330  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:09:27.135042  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:09:27.143864  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:09:27.152568  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:09:27.157253  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:09:27.157336  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:09:27.203904  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:09:27.213041  564163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:09:27.216525  564163 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:09:27.216581  564163 kubeadm.go:401] StartCluster: {Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:09:27.216654  564163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:09:27.216709  564163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:09:27.244145  564163 cri.go:89] found id: ""
	I1101 11:09:27.244250  564163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:09:27.252322  564163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:09:27.260600  564163 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 11:09:27.260684  564163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:09:27.268298  564163 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:09:27.268321  564163 kubeadm.go:158] found existing configuration files:
	
	I1101 11:09:27.268382  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:09:27.276280  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:09:27.276376  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:09:27.283648  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:09:27.291436  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:09:27.291503  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:09:27.298997  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:09:27.306741  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:09:27.306823  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:09:27.314212  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:09:27.322170  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:09:27.322240  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:09:27.329643  564163 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 11:09:27.373103  564163 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:09:27.373161  564163 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:09:27.398676  564163 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 11:09:27.398752  564163 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 11:09:27.398789  564163 kubeadm.go:319] OS: Linux
	I1101 11:09:27.398836  564163 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 11:09:27.398892  564163 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 11:09:27.398942  564163 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 11:09:27.398993  564163 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 11:09:27.399043  564163 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 11:09:27.399093  564163 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 11:09:27.399142  564163 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 11:09:27.399193  564163 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 11:09:27.399241  564163 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 11:09:27.468424  564163 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:09:27.468540  564163 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:09:27.468642  564163 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:09:27.479009  564163 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:09:27.485386  564163 out.go:252]   - Generating certificates and keys ...
	I1101 11:09:27.485566  564163 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:09:27.485677  564163 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:09:28.146819  564163 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:09:29.237838  564163 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:09:29.375619  564163 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:09:29.669873  564163 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:09:30.116711  564163 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:09:30.116850  564163 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [ha-472819 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 11:09:30.327488  564163 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:09:30.327663  564163 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [ha-472819 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 11:09:30.738429  564163 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:09:30.840698  564163 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:09:31.445175  564163 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:09:31.445453  564163 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:09:31.867483  564163 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:09:32.130887  564163 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:09:32.613555  564163 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:09:33.124550  564163 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:09:33.371630  564163 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:09:33.372352  564163 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:09:33.374952  564163 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:09:33.378411  564163 out.go:252]   - Booting up control plane ...
	I1101 11:09:33.378530  564163 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:09:33.378616  564163 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:09:33.379704  564163 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:09:33.398248  564163 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:09:33.398586  564163 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:09:33.406765  564163 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:09:33.407161  564163 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:09:33.407368  564163 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:09:33.546255  564163 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:09:33.546390  564163 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:09:34.539783  564163 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000858349s
	I1101 11:09:34.543235  564163 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:09:34.543333  564163 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 11:09:34.543583  564163 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:09:34.543675  564163 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:09:37.986079  564163 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.442309072s
	I1101 11:09:39.112325  564163 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.569041285s
	I1101 11:09:41.044837  564163 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501498887s
	I1101 11:09:41.064487  564163 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:09:41.079898  564163 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:09:41.101331  564163 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:09:41.101559  564163 kubeadm.go:319] [mark-control-plane] Marking the node ha-472819 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:09:41.119154  564163 kubeadm.go:319] [bootstrap-token] Using token: btb653.26s7hd24i40lgq1y
	I1101 11:09:41.122135  564163 out.go:252]   - Configuring RBAC rules ...
	I1101 11:09:41.122261  564163 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:09:41.131456  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:09:41.147503  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:09:41.154473  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:09:41.161216  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:09:41.167042  564163 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:09:41.452979  564163 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:09:41.879928  564163 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:09:42.452390  564163 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:09:42.453615  564163 kubeadm.go:319] 
	I1101 11:09:42.453715  564163 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:09:42.453728  564163 kubeadm.go:319] 
	I1101 11:09:42.453805  564163 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:09:42.453812  564163 kubeadm.go:319] 
	I1101 11:09:42.453838  564163 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:09:42.453902  564163 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:09:42.453956  564163 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:09:42.453964  564163 kubeadm.go:319] 
	I1101 11:09:42.454018  564163 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:09:42.454026  564163 kubeadm.go:319] 
	I1101 11:09:42.454074  564163 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:09:42.454082  564163 kubeadm.go:319] 
	I1101 11:09:42.454134  564163 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:09:42.454210  564163 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:09:42.454282  564163 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:09:42.454290  564163 kubeadm.go:319] 
	I1101 11:09:42.454374  564163 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:09:42.454453  564163 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:09:42.454464  564163 kubeadm.go:319] 
	I1101 11:09:42.454806  564163 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token btb653.26s7hd24i40lgq1y \
	I1101 11:09:42.454943  564163 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 11:09:42.454976  564163 kubeadm.go:319] 	--control-plane 
	I1101 11:09:42.454986  564163 kubeadm.go:319] 
	I1101 11:09:42.455099  564163 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:09:42.455108  564163 kubeadm.go:319] 
	I1101 11:09:42.455198  564163 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token btb653.26s7hd24i40lgq1y \
	I1101 11:09:42.455312  564163 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 11:09:42.459886  564163 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 11:09:42.460131  564163 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 11:09:42.460249  564163 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:09:42.460270  564163 cni.go:84] Creating CNI manager for ""
	I1101 11:09:42.460281  564163 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1101 11:09:42.463474  564163 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 11:09:42.466360  564163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 11:09:42.471008  564163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 11:09:42.471032  564163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 11:09:42.485436  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 11:09:42.779960  564163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:09:42.780051  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:42.780097  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472819 minikube.k8s.io/updated_at=2025_11_01T11_09_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=ha-472819 minikube.k8s.io/primary=true
	I1101 11:09:42.946730  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:42.946790  564163 ops.go:34] apiserver oom_adj: -16
	I1101 11:09:43.447305  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:43.947632  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:44.447625  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:44.946864  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:45.447661  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:45.947118  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:46.446849  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:46.947087  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:47.057163  564163 kubeadm.go:1114] duration metric: took 4.277184864s to wait for elevateKubeSystemPrivileges
	I1101 11:09:47.057195  564163 kubeadm.go:403] duration metric: took 19.840620587s to StartCluster
	I1101 11:09:47.057212  564163 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:47.057272  564163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:09:47.057994  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:47.058207  564163 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:09:47.058246  564163 start.go:242] waiting for startup goroutines ...
	I1101 11:09:47.058255  564163 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:09:47.058318  564163 addons.go:70] Setting storage-provisioner=true in profile "ha-472819"
	I1101 11:09:47.058338  564163 addons.go:239] Setting addon storage-provisioner=true in "ha-472819"
	I1101 11:09:47.058365  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:09:47.058832  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:47.058997  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 11:09:47.059257  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:47.059298  564163 addons.go:70] Setting default-storageclass=true in profile "ha-472819"
	I1101 11:09:47.059315  564163 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "ha-472819"
	I1101 11:09:47.059542  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:47.094113  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:09:47.094672  564163 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:09:47.094686  564163 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:09:47.094691  564163 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:09:47.094696  564163 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:09:47.094700  564163 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:09:47.095052  564163 addons.go:239] Setting addon default-storageclass=true in "ha-472819"
	I1101 11:09:47.099545  564163 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 11:09:47.099640  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:09:47.100113  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:47.107324  564163 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:09:47.110259  564163 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:09:47.110280  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:09:47.110343  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:47.139417  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:47.148358  564163 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:09:47.148379  564163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:09:47.148446  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:47.178909  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:47.268521  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 11:09:47.290532  564163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:09:47.506464  564163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:09:47.743529  564163 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 11:09:47.978148  564163 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 11:09:47.981058  564163 addons.go:515] duration metric: took 922.780782ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 11:09:47.981103  564163 start.go:247] waiting for cluster config update ...
	I1101 11:09:47.981118  564163 start.go:256] writing updated cluster config ...
	I1101 11:09:47.984275  564163 out.go:203] 
	I1101 11:09:47.987383  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:47.987471  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:47.990706  564163 out.go:179] * Starting "ha-472819-m02" control-plane node in "ha-472819" cluster
	I1101 11:09:47.993384  564163 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:09:47.996301  564163 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:09:47.999988  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:48.000022  564163 cache.go:59] Caching tarball of preloaded images
	I1101 11:09:48.000067  564163 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:09:48.000117  564163 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:09:48.000134  564163 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:09:48.000258  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:48.023925  564163 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:09:48.023946  564163 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:09:48.023960  564163 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:09:48.023985  564163 start.go:360] acquireMachinesLock for ha-472819-m02: {Name:mkd9b09c2f5958eb6cf9785ab2b809fc6e14102e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:09:48.024101  564163 start.go:364] duration metric: took 98.758µs to acquireMachinesLock for "ha-472819-m02"
	I1101 11:09:48.024126  564163 start.go:93] Provisioning new machine with config: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:09:48.024211  564163 start.go:125] createHost starting for "m02" (driver="docker")
	I1101 11:09:48.027535  564163 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:09:48.027682  564163 start.go:159] libmachine.API.Create for "ha-472819" (driver="docker")
	I1101 11:09:48.027709  564163 client.go:173] LocalClient.Create starting
	I1101 11:09:48.027787  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:09:48.027824  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:48.027850  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:48.027911  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:09:48.027933  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:48.027947  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:48.028229  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:09:48.045847  564163 network_create.go:77] Found existing network {name:ha-472819 subnet:0x4001e9bf20 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1101 11:09:48.045902  564163 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472819-m02" container
	I1101 11:09:48.045995  564163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:09:48.063585  564163 cli_runner.go:164] Run: docker volume create ha-472819-m02 --label name.minikube.sigs.k8s.io=ha-472819-m02 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:09:48.081625  564163 oci.go:103] Successfully created a docker volume ha-472819-m02
	I1101 11:09:48.081759  564163 cli_runner.go:164] Run: docker run --rm --name ha-472819-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m02 --entrypoint /usr/bin/test -v ha-472819-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:09:48.719367  564163 oci.go:107] Successfully prepared a docker volume ha-472819-m02
	I1101 11:09:48.719403  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:48.719424  564163 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:09:48.719499  564163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:09:53.148805  564163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.429267175s)
	I1101 11:09:53.148842  564163 kic.go:203] duration metric: took 4.429414598s to extract preloaded images to volume ...
	W1101 11:09:53.148976  564163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:09:53.149102  564163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:09:53.205412  564163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472819-m02 --name ha-472819-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472819-m02 --network ha-472819 --ip 192.168.49.3 --volume ha-472819-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:09:53.540381  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Running}}
	I1101 11:09:53.560268  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:09:53.587014  564163 cli_runner.go:164] Run: docker exec ha-472819-m02 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:09:53.644476  564163 oci.go:144] the created container "ha-472819-m02" has a running status.
	I1101 11:09:53.644505  564163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa...
	I1101 11:09:53.818753  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 11:09:53.818798  564163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:09:53.843513  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:09:53.867129  564163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:09:53.867148  564163 kic_runner.go:114] Args: [docker exec --privileged ha-472819-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:09:53.915191  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:09:53.944431  564163 machine.go:94] provisionDockerMachine start ...
	I1101 11:09:53.944522  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:53.971708  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:53.972031  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:53.972040  564163 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:09:53.972718  564163 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:09:57.125375  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02
	
	I1101 11:09:57.125400  564163 ubuntu.go:182] provisioning hostname "ha-472819-m02"
	I1101 11:09:57.125484  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:57.151306  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:57.151617  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:57.151628  564163 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819-m02 && echo "ha-472819-m02" | sudo tee /etc/hostname
	I1101 11:09:57.311009  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02
	
	I1101 11:09:57.311162  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:57.329599  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:57.330232  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:57.330258  564163 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:09:57.478017  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:09:57.478047  564163 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:09:57.478067  564163 ubuntu.go:190] setting up certificates
	I1101 11:09:57.478078  564163 provision.go:84] configureAuth start
	I1101 11:09:57.478139  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:09:57.496426  564163 provision.go:143] copyHostCerts
	I1101 11:09:57.496480  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:57.496514  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:09:57.496527  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:57.496611  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:09:57.496704  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:57.496727  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:09:57.496735  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:57.496763  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:09:57.496816  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:57.496837  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:09:57.496845  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:57.496872  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:09:57.496927  564163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819-m02 san=[127.0.0.1 192.168.49.3 ha-472819-m02 localhost minikube]
	I1101 11:09:58.109118  564163 provision.go:177] copyRemoteCerts
	I1101 11:09:58.109211  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:09:58.109257  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.129970  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.239761  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:09:58.239822  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:09:58.258364  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:09:58.258429  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:09:58.277456  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:09:58.277565  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 11:09:58.296719  564163 provision.go:87] duration metric: took 818.627177ms to configureAuth
	I1101 11:09:58.296743  564163 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:09:58.296931  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:58.297053  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.314394  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:58.314702  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:58.314723  564163 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:09:58.587462  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:09:58.587485  564163 machine.go:97] duration metric: took 4.643032549s to provisionDockerMachine
	I1101 11:09:58.587494  564163 client.go:176] duration metric: took 10.55977608s to LocalClient.Create
	I1101 11:09:58.587508  564163 start.go:167] duration metric: took 10.559829168s to libmachine.API.Create "ha-472819"
	I1101 11:09:58.587515  564163 start.go:293] postStartSetup for "ha-472819-m02" (driver="docker")
	I1101 11:09:58.587525  564163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:09:58.587591  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:09:58.587640  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.607665  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.713788  564163 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:09:58.716917  564163 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:09:58.716946  564163 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:09:58.716959  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:09:58.717015  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:09:58.717094  564163 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:09:58.717106  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:09:58.717202  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:09:58.725530  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:09:58.744321  564163 start.go:296] duration metric: took 156.786029ms for postStartSetup
	I1101 11:09:58.744753  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:09:58.763769  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:58.764152  564163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:09:58.764219  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.781753  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.882710  564163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:09:58.887489  564163 start.go:128] duration metric: took 10.863262048s to createHost
	I1101 11:09:58.887514  564163 start.go:83] releasing machines lock for "ha-472819-m02", held for 10.863404934s
	I1101 11:09:58.887586  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:09:58.910138  564163 out.go:179] * Found network options:
	I1101 11:09:58.913143  564163 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 11:09:58.916072  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:09:58.916119  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 11:09:58.916188  564163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:09:58.916238  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.916502  564163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:09:58.916556  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.940313  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.958881  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:59.095364  564163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:09:59.155074  564163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:09:59.155153  564163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:09:59.189592  564163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:09:59.189654  564163 start.go:496] detecting cgroup driver to use...
	I1101 11:09:59.189741  564163 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:09:59.189821  564163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:09:59.208490  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:09:59.221089  564163 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:09:59.221155  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:09:59.239119  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:09:59.259726  564163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:09:59.391794  564163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:09:59.526999  564163 docker.go:234] disabling docker service ...
	I1101 11:09:59.527094  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:09:59.551165  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:09:59.564683  564163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:09:59.689243  564163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:09:59.814478  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:09:59.830425  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:09:59.846686  564163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:09:59.846772  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.858345  564163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:09:59.858425  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.867679  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.876755  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.885935  564163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:09:59.894400  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.903514  564163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.917030  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.932785  564163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:09:59.941248  564163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:09:59.948963  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:00.218274  564163 ssh_runner.go:195] Run: sudo systemctl restart crio
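	The block of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it points pause_image at registry.k8s.io/pause:3.10.1, switches cgroup_manager to cgroupfs, adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before CRI-O is restarted. A minimal manual spot-check over SSH (not something the test itself runs) could look like this:
	    # confirm the rewritten keys and that crictl can reach the restarted runtime
	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl version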
	I1101 11:10:00.467897  564163 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:10:00.468023  564163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:10:00.473180  564163 start.go:564] Will wait 60s for crictl version
	I1101 11:10:00.473312  564163 ssh_runner.go:195] Run: which crictl
	I1101 11:10:00.478332  564163 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:10:00.530922  564163 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:10:00.531117  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:10:00.572083  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:10:00.626266  564163 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:10:00.630488  564163 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 11:10:00.640119  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
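	The long Go template passed to docker network inspect above just pulls the subnet, gateway, MTU and container IPs of the ha-472819 network in a single call. A simpler, hypothetical equivalent for checking only the subnet and gateway by hand:
	    # hypothetical manual check of the cluster network (same data, shorter template)
	    docker network inspect ha-472819 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'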
	I1101 11:10:00.660609  564163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:10:00.667050  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:10:00.680675  564163 mustload.go:66] Loading cluster: ha-472819
	I1101 11:10:00.680904  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:00.681185  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:10:00.703209  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:10:00.703529  564163 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.3
	I1101 11:10:00.703549  564163 certs.go:195] generating shared ca certs ...
	I1101 11:10:00.703567  564163 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:00.703707  564163 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:10:00.703893  564163 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:10:00.703917  564163 certs.go:257] generating profile certs ...
	I1101 11:10:00.704037  564163 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:10:00.704075  564163 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717
	I1101 11:10:00.704096  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 11:10:00.826368  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717 ...
	I1101 11:10:00.826415  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717: {Name:mk86b52ad2762405e19fd51a0df3aa2cea75b088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:00.826658  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717 ...
	I1101 11:10:00.826682  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717: {Name:mk7b72151895b70df48e1e5a1aaae8ffe13ae0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:00.826797  564163 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:10:00.826971  564163 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
	I1101 11:10:00.827168  564163 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
	I1101 11:10:00.827191  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:10:00.827207  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:10:00.827220  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:10:00.827235  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:10:00.827247  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:10:00.827260  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:10:00.827272  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:10:00.827284  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:10:00.827342  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:10:00.827371  564163 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:10:00.827380  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:10:00.827406  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:10:00.827430  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:10:00.827453  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:10:00.827502  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:10:00.827540  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:00.827559  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:10:00.827577  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:10:00.827662  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:10:00.847637  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:10:00.950109  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 11:10:00.954551  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 11:10:00.963598  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 11:10:00.967474  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 11:10:00.977011  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 11:10:00.981033  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 11:10:00.990888  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 11:10:00.995065  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 11:10:01.006340  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 11:10:01.011555  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 11:10:01.021211  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 11:10:01.025375  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 11:10:01.035197  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:10:01.059524  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:10:01.081340  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:10:01.103476  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:10:01.127247  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 11:10:01.151362  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:10:01.172198  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:10:01.192991  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:10:01.213776  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:10:01.236349  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:10:01.258068  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:10:01.278528  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 11:10:01.293835  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 11:10:01.309512  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 11:10:01.326059  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 11:10:01.340597  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 11:10:01.356408  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 11:10:01.370667  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 11:10:01.385223  564163 ssh_runner.go:195] Run: openssl version
	I1101 11:10:01.391999  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:10:01.401466  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:01.405725  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:01.405836  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:01.448102  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:10:01.458363  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:10:01.467703  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:10:01.471965  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:10:01.472078  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:10:01.516292  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:10:01.525777  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:10:01.535004  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:10:01.539426  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:10:01.539524  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:10:01.581188  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
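	The openssl x509 -hash calls above compute the subject hash used to name the .0 symlinks in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted CA certificates. A hypothetical manual verification of one of those links, run inside the node:
	    # recompute the subject hash and confirm the symlink points at the CA file
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${h}.0"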
	I1101 11:10:01.590430  564163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:10:01.594810  564163 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:10:01.594902  564163 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 11:10:01.595058  564163 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
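	The kubelet flags above are rendered into the 10-kubeadm.conf systemd drop-in copied to the node a few lines below. Assuming SSH access to the profile, a hypothetical way to confirm what kubelet actually starts with on m02:
	    # show the kubelet unit plus the generated drop-in on the new node
	    minikube -p ha-472819 ssh -n ha-472819-m02 -- sudo systemctl cat kubelet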
	I1101 11:10:01.595090  564163 kube-vip.go:115] generating kube-vip config ...
	I1101 11:10:01.595158  564163 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:10:01.610580  564163 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:01.610640  564163 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
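	The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so kubelet runs it as a static pod named after the node. A hypothetical check once the node has joined:
	    # static pods are suffixed with the node name, hence kube-vip-ha-472819-m02
	    kubectl --context ha-472819 -n kube-system get pod kube-vip-ha-472819-m02 -o wide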
	I1101 11:10:01.610708  564163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:10:01.619440  564163 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:10:01.619523  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 11:10:01.628852  564163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 11:10:01.645438  564163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:10:01.660320  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 11:10:01.674752  564163 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:10:01.678687  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
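	The hosts-file rewrite above pins control-plane.minikube.internal to the kube-vip address 192.168.49.254 inside the node, which is the endpoint the kubeadm join below dials. A hypothetical resolution check from the host:
	    # expect 192.168.49.254 back from the node's /etc/hosts
	    minikube -p ha-472819 ssh -n ha-472819-m02 -- getent hosts control-plane.minikube.internal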
	I1101 11:10:01.689054  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:01.817597  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:01.837667  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:10:01.838014  564163 start.go:318] joinCluster: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:01.838163  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 11:10:01.838226  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:10:01.859436  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:10:02.042488  564163 start.go:344] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:02.042580  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nj59cf.7iwf9knb5mhb6h6v --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I1101 11:10:18.171255  564163 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nj59cf.7iwf9knb5mhb6h6v --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.128647108s)
	I1101 11:10:18.171325  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1101 11:10:18.661883  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472819-m02 minikube.k8s.io/updated_at=2025_11_01T11_10_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=ha-472819 minikube.k8s.io/primary=false
	I1101 11:10:18.772634  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472819-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1101 11:10:18.882683  564163 start.go:320] duration metric: took 17.044664183s to joinCluster
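	With the kubeadm join complete, m02 should now appear as a second control-plane member with its own etcd and apiserver static pods (the pod list later in this log confirms it). A hypothetical after-the-fact check from the host:
	    # list nodes and the per-node etcd members of the HA control plane
	    kubectl --context ha-472819 get nodes -o wide
	    kubectl --context ha-472819 -n kube-system get pods -l component=etcd -o wide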
	I1101 11:10:18.882755  564163 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:18.883009  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:18.885639  564163 out.go:179] * Verifying Kubernetes components...
	I1101 11:10:18.888568  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:19.048903  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:19.065355  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 11:10:19.065459  564163 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 11:10:19.065823  564163 node_ready.go:35] waiting up to 6m0s for node "ha-472819-m02" to be "Ready" ...
	W1101 11:10:21.069930  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:23.070173  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:25.070653  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:27.070920  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:29.570450  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:32.070088  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:34.570636  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:37.069136  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:39.069864  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:41.070271  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:43.570410  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:46.070044  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:48.570002  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:51.069255  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:53.069352  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:55.071799  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:57.570240  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:59.570964  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:11:02.069796  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	I1101 11:11:02.573138  564163 node_ready.go:49] node "ha-472819-m02" is "Ready"
	I1101 11:11:02.573173  564163 node_ready.go:38] duration metric: took 43.507323629s for node "ha-472819-m02" to be "Ready" ...
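	The retry loop above simply polls the node object until its Ready condition flips to True, which took about 43.5s here. A hypothetical one-liner that waits on the same condition:
	    # equivalent wait, using the same 6m budget the test allows
	    kubectl --context ha-472819 wait --for=condition=Ready node/ha-472819-m02 --timeout=6m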
	I1101 11:11:02.573188  564163 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:11:02.573247  564163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:11:02.592905  564163 api_server.go:72] duration metric: took 43.710116122s to wait for apiserver process to appear ...
	I1101 11:11:02.592930  564163 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:11:02.592950  564163 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 11:11:02.601671  564163 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 11:11:02.602744  564163 api_server.go:141] control plane version: v1.34.1
	I1101 11:11:02.602768  564163 api_server.go:131] duration metric: took 9.830793ms to wait for apiserver health ...
	I1101 11:11:02.602776  564163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:11:02.608318  564163 system_pods.go:59] 17 kube-system pods found
	I1101 11:11:02.608348  564163 system_pods.go:61] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:11:02.608355  564163 system_pods.go:61] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:11:02.608360  564163 system_pods.go:61] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:11:02.608364  564163 system_pods.go:61] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:11:02.608368  564163 system_pods.go:61] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:11:02.608372  564163 system_pods.go:61] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:11:02.608376  564163 system_pods.go:61] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:11:02.608380  564163 system_pods.go:61] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:11:02.608385  564163 system_pods.go:61] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:11:02.608389  564163 system_pods.go:61] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:11:02.608398  564163 system_pods.go:61] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:11:02.608402  564163 system_pods.go:61] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:11:02.608407  564163 system_pods.go:61] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:11:02.608411  564163 system_pods.go:61] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:11:02.608415  564163 system_pods.go:61] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:11:02.608419  564163 system_pods.go:61] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:11:02.608424  564163 system_pods.go:61] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:11:02.608433  564163 system_pods.go:74] duration metric: took 5.651361ms to wait for pod list to return data ...
	I1101 11:11:02.608449  564163 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:11:02.615780  564163 default_sa.go:45] found service account: "default"
	I1101 11:11:02.615809  564163 default_sa.go:55] duration metric: took 7.352591ms for default service account to be created ...
	I1101 11:11:02.615820  564163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:11:02.620640  564163 system_pods.go:86] 17 kube-system pods found
	I1101 11:11:02.620675  564163 system_pods.go:89] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:11:02.620683  564163 system_pods.go:89] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:11:02.620687  564163 system_pods.go:89] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:11:02.620691  564163 system_pods.go:89] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:11:02.620695  564163 system_pods.go:89] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:11:02.620698  564163 system_pods.go:89] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:11:02.620705  564163 system_pods.go:89] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:11:02.620710  564163 system_pods.go:89] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:11:02.620714  564163 system_pods.go:89] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:11:02.620718  564163 system_pods.go:89] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:11:02.620722  564163 system_pods.go:89] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:11:02.620726  564163 system_pods.go:89] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:11:02.620731  564163 system_pods.go:89] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:11:02.620740  564163 system_pods.go:89] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:11:02.620745  564163 system_pods.go:89] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:11:02.620779  564163 system_pods.go:89] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:11:02.620787  564163 system_pods.go:89] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:11:02.620796  564163 system_pods.go:126] duration metric: took 4.970499ms to wait for k8s-apps to be running ...
	I1101 11:11:02.620805  564163 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:11:02.620889  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:11:02.642752  564163 system_svc.go:56] duration metric: took 21.936584ms WaitForService to wait for kubelet
	I1101 11:11:02.642777  564163 kubeadm.go:587] duration metric: took 43.759995961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:11:02.642797  564163 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:11:02.646454  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:11:02.646483  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:11:02.646495  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:11:02.646499  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:11:02.646505  564163 node_conditions.go:105] duration metric: took 3.701806ms to run NodePressure ...
	I1101 11:11:02.646517  564163 start.go:242] waiting for startup goroutines ...
	I1101 11:11:02.646543  564163 start.go:256] writing updated cluster config ...
	I1101 11:11:02.649940  564163 out.go:203] 
	I1101 11:11:02.652977  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:02.653122  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:11:02.656351  564163 out.go:179] * Starting "ha-472819-m03" control-plane node in "ha-472819" cluster
	I1101 11:11:02.659046  564163 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:11:02.661940  564163 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:11:02.664802  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:11:02.664836  564163 cache.go:59] Caching tarball of preloaded images
	I1101 11:11:02.664905  564163 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:11:02.664972  564163 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:11:02.664989  564163 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:11:02.665111  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:11:02.684654  564163 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:11:02.684679  564163 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:11:02.684692  564163 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:11:02.684714  564163 start.go:360] acquireMachinesLock for ha-472819-m03: {Name:mk3b84885ff8ece87965a525482df80362a95518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:11:02.684837  564163 start.go:364] duration metric: took 95.632µs to acquireMachinesLock for "ha-472819-m03"
	I1101 11:11:02.684871  564163 start.go:93] Provisioning new machine with config: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:02.684979  564163 start.go:125] createHost starting for "m03" (driver="docker")
	I1101 11:11:02.688507  564163 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:11:02.688640  564163 start.go:159] libmachine.API.Create for "ha-472819" (driver="docker")
	I1101 11:11:02.688672  564163 client.go:173] LocalClient.Create starting
	I1101 11:11:02.688778  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:11:02.688818  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:11:02.688837  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:11:02.688891  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:11:02.688915  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:11:02.688930  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:11:02.689181  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:11:02.707661  564163 network_create.go:77] Found existing network {name:ha-472819 subnet:0x400141a090 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1101 11:11:02.707696  564163 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472819-m03" container
	I1101 11:11:02.707771  564163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:11:02.725657  564163 cli_runner.go:164] Run: docker volume create ha-472819-m03 --label name.minikube.sigs.k8s.io=ha-472819-m03 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:11:02.747975  564163 oci.go:103] Successfully created a docker volume ha-472819-m03
	I1101 11:11:02.748068  564163 cli_runner.go:164] Run: docker run --rm --name ha-472819-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m03 --entrypoint /usr/bin/test -v ha-472819-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:11:03.319169  564163 oci.go:107] Successfully prepared a docker volume ha-472819-m03
	I1101 11:11:03.319219  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:11:03.319239  564163 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:11:03.319307  564163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:11:07.770669  564163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.451319736s)
	I1101 11:11:07.770702  564163 kic.go:203] duration metric: took 4.451458674s to extract preloaded images to volume ...
	W1101 11:11:07.770834  564163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:11:07.770945  564163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:11:07.830823  564163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472819-m03 --name ha-472819-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472819-m03 --network ha-472819 --ip 192.168.49.4 --volume ha-472819-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:11:08.209425  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Running}}
	I1101 11:11:08.233583  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:11:08.259513  564163 cli_runner.go:164] Run: docker exec ha-472819-m03 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:11:08.318997  564163 oci.go:144] the created container "ha-472819-m03" has a running status.
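	The docker run above attaches the m03 container to the ha-472819 network with the static IP 192.168.49.4 calculated earlier. A hypothetical check that the address stuck:
	    # expect 192.168.49.4 for the new node container
	    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-472819-m03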
	I1101 11:11:08.319026  564163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa...
	I1101 11:11:08.824570  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 11:11:08.824677  564163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:11:08.851513  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:11:08.872714  564163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:11:08.872734  564163 kic_runner.go:114] Args: [docker exec --privileged ha-472819-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:11:08.929245  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:11:08.951783  564163 machine.go:94] provisionDockerMachine start ...
	I1101 11:11:08.951884  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:08.971759  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:08.972075  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:08.972084  564163 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:11:08.972780  564163 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:11:12.129991  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m03
	
	I1101 11:11:12.130016  564163 ubuntu.go:182] provisioning hostname "ha-472819-m03"
	I1101 11:11:12.130084  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:12.158484  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:12.158792  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:12.158807  564163 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819-m03 && echo "ha-472819-m03" | sudo tee /etc/hostname
	I1101 11:11:12.332540  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m03
	
	I1101 11:11:12.332619  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:12.351114  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:12.351425  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:12.351444  564163 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:11:12.502378  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:11:12.502450  564163 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:11:12.502486  564163 ubuntu.go:190] setting up certificates
	I1101 11:11:12.502527  564163 provision.go:84] configureAuth start
	I1101 11:11:12.502632  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:11:12.519713  564163 provision.go:143] copyHostCerts
	I1101 11:11:12.519756  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:11:12.519789  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:11:12.519796  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:11:12.519876  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:11:12.519955  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:11:12.519972  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:11:12.519977  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:11:12.520002  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:11:12.520040  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:11:12.520056  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:11:12.520060  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:11:12.520083  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:11:12.520129  564163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819-m03 san=[127.0.0.1 192.168.49.4 ha-472819-m03 localhost minikube]
	I1101 11:11:13.612826  564163 provision.go:177] copyRemoteCerts
	I1101 11:11:13.612953  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:11:13.613033  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:13.637098  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:13.745943  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:11:13.746008  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:11:13.765430  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:11:13.765498  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 11:11:13.784115  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:11:13.784225  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:11:13.804599  564163 provision.go:87] duration metric: took 1.3020372s to configureAuth
	I1101 11:11:13.804630  564163 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:11:13.804902  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:13.805040  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:13.824430  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:13.824739  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:13.824766  564163 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:11:14.162205  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:11:14.162231  564163 machine.go:97] duration metric: took 5.210427743s to provisionDockerMachine
	I1101 11:11:14.162240  564163 client.go:176] duration metric: took 11.473529387s to LocalClient.Create
	I1101 11:11:14.162254  564163 start.go:167] duration metric: took 11.473618142s to libmachine.API.Create "ha-472819"
	I1101 11:11:14.162261  564163 start.go:293] postStartSetup for "ha-472819-m03" (driver="docker")
	I1101 11:11:14.162271  564163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:11:14.162342  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:11:14.162391  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.180648  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.290174  564163 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:11:14.293569  564163 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:11:14.293597  564163 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:11:14.293610  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:11:14.293673  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:11:14.293794  564163 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:11:14.293802  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:11:14.293910  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:11:14.301571  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:11:14.319949  564163 start.go:296] duration metric: took 157.672021ms for postStartSetup
	I1101 11:11:14.320309  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:11:14.339186  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:11:14.339488  564163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:11:14.339544  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.356118  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.462914  564163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:11:14.467880  564163 start.go:128] duration metric: took 11.782886335s to createHost
	I1101 11:11:14.467906  564163 start.go:83] releasing machines lock for "ha-472819-m03", held for 11.783053073s
	I1101 11:11:14.467977  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:11:14.491429  564163 out.go:179] * Found network options:
	I1101 11:11:14.494078  564163 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1101 11:11:14.497086  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:11:14.497117  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:11:14.497140  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:11:14.497150  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 11:11:14.497218  564163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:11:14.497268  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.497522  564163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:11:14.497567  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.527486  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.535318  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.691428  564163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:11:14.751657  564163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:11:14.751737  564163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:11:14.784282  564163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:11:14.784309  564163 start.go:496] detecting cgroup driver to use...
	I1101 11:11:14.784342  564163 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:11:14.784395  564163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:11:14.804100  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:11:14.817895  564163 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:11:14.817997  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:11:14.836631  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:11:14.858424  564163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:11:14.995659  564163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:11:15.154010  564163 docker.go:234] disabling docker service ...
	I1101 11:11:15.154132  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:11:15.178485  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:11:15.193239  564163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:11:15.327679  564163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:11:15.454029  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:11:15.467802  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:11:15.484429  564163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:11:15.484525  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.494973  564163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:11:15.495089  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.505092  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.515189  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.525494  564163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:11:15.535841  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.545915  564163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.567914  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.577406  564163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:11:15.585734  564163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:11:15.594497  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:11:15.717478  564163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:11:15.855286  564163 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:11:15.855416  564163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:11:15.860080  564163 start.go:564] Will wait 60s for crictl version
	I1101 11:11:15.860199  564163 ssh_runner.go:195] Run: which crictl
	I1101 11:11:15.864416  564163 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:11:15.901128  564163 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:11:15.901276  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:11:15.939191  564163 ssh_runner.go:195] Run: crio --version
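The block above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroup manager, conmon cgroup, default sysctls), restarts CRI-O, and then waits up to 60s for /var/run/crio/crio.sock before asking crictl for a version. The Go sketch below mirrors that edit-restart-poll pattern; the runSSH helper, host string, and timeouts are illustrative stand-ins, not minikube's ssh_runner.

// criowait.go - illustrative sketch only, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH is a hypothetical helper; minikube routes these commands through
// its own ssh_runner against the node's forwarded SSH port instead.
func runSSH(host string, args ...string) error {
	return exec.Command("ssh", append([]string{host}, args...)...).Run()
}

func main() {
	host := "docker@127.0.0.1" // placeholder for the node's SSH endpoint

	// Apply one of the config edits seen in the log, then restart CRI-O.
	_ = runSSH(host, "sudo", "sed", "-i",
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		"/etc/crio/crio.conf.d/02-crio.conf")
	_ = runSSH(host, "sudo", "systemctl", "restart", "crio")

	// Wait up to 60s for the CRI socket, mirroring "Will wait 60s for socket path".
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if runSSH(host, "stat", "/var/run/crio/crio.sock") == nil {
			fmt.Println("crio socket is up")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for the crio socket")
}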
	I1101 11:11:15.976924  564163 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:11:15.979768  564163 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 11:11:15.982623  564163 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1101 11:11:15.985569  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:11:16.003209  564163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:11:16.009752  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:11:16.022020  564163 mustload.go:66] Loading cluster: ha-472819
	I1101 11:11:16.022297  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:16.022563  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:11:16.041810  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:11:16.042214  564163 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.4
	I1101 11:11:16.042226  564163 certs.go:195] generating shared ca certs ...
	I1101 11:11:16.042242  564163 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:11:16.042364  564163 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:11:16.042403  564163 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:11:16.042420  564163 certs.go:257] generating profile certs ...
	I1101 11:11:16.042507  564163 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:11:16.042544  564163 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d
	I1101 11:11:16.042559  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1101 11:11:17.419467  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d ...
	I1101 11:11:17.419504  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d: {Name:mke3ca75daab1021e235325f0aa6ae3fdb3aebaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:11:17.419709  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d ...
	I1101 11:11:17.419723  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d: {Name:mk05b86323e75bb15d0b4b2c07a8199585004a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:11:17.419819  564163 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:11:17.419955  564163 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
	I1101 11:11:17.420103  564163 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
	I1101 11:11:17.420121  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:11:17.420136  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:11:17.420154  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:11:17.420170  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:11:17.420183  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:11:17.420205  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:11:17.420223  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:11:17.420239  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:11:17.420291  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:11:17.420324  564163 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:11:17.420336  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:11:17.420360  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:11:17.420385  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:11:17.420410  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:11:17.420457  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:11:17.420490  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:17.420505  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:11:17.420518  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:11:17.420579  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:11:17.446123  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:11:17.550099  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 11:11:17.554459  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 11:11:17.564628  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 11:11:17.568775  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 11:11:17.578449  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 11:11:17.582314  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 11:11:17.596987  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 11:11:17.602143  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 11:11:17.612550  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 11:11:17.616760  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 11:11:17.625280  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 11:11:17.629323  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 11:11:17.637956  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:11:17.670252  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:11:17.690944  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:11:17.712898  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:11:17.732957  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1101 11:11:17.754424  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:11:17.774366  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:11:17.794672  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:11:17.813962  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:11:17.832048  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:11:17.851761  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:11:17.870024  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 11:11:17.882634  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 11:11:17.899889  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 11:11:17.913845  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 11:11:17.938379  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 11:11:17.953783  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 11:11:17.969134  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 11:11:17.984029  564163 ssh_runner.go:195] Run: openssl version
	I1101 11:11:17.990588  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:11:17.998983  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:11:18.003748  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:11:18.003824  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:11:18.047711  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:11:18.056769  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:11:18.066764  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:18.071057  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:18.071127  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:18.115164  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:11:18.125731  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:11:18.136765  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:11:18.143282  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:11:18.143350  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:11:18.186179  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
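The loop above copies each CA bundle into /usr/share/ca-certificates, computes its OpenSSL subject hash, and links /etc/ssl/certs/<hash>.0 back to it so the system trust store picks it up. Below is a small Go sketch of that hash-and-symlink step, run locally instead of over SSH; the input path and error handling are simplified for illustration.

// cahash.go - sketch of the "hash and symlink" step seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example input

	// "openssl x509 -hash -noout" prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 -> the PEM, like the "ln -fs" in the log.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink failed (likely needs root):", err)
		return
	}
	fmt.Println("linked", link, "->", pem)
}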
	I1101 11:11:18.195359  564163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:11:18.199380  564163 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:11:18.199489  564163 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1101 11:11:18.199610  564163 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:11:18.199643  564163 kube-vip.go:115] generating kube-vip config ...
	I1101 11:11:18.199705  564163 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:11:18.212173  564163 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:11:18.212238  564163 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
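The YAML above is the kube-vip static pod manifest that minikube copies to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down (1358 bytes). As a rough illustration of how such a manifest can be rendered from a handful of values, here is a minimal text/template sketch; the vipConfig struct, template, and reduced field set are assumptions for the example, not minikube's kube-vip.go.

// kubevip_render.go - illustrative template rendering, not minikube's code.
package main

import (
	"os"
	"text/template"
)

// vipConfig holds only the values that vary in the manifest above.
type vipConfig struct {
	VIP       string
	Port      string
	Interface string
	Image     string
}

const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	cfg := vipConfig{
		VIP:       "192.168.49.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v1.0.1",
	}
	// Render to stdout; minikube instead scps the result into the manifests dir.
	tmpl := template.Must(template.New("kube-vip").Parse(podTmpl))
	_ = tmpl.Execute(os.Stdout, cfg)
}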
	I1101 11:11:18.212301  564163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:11:18.220267  564163 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:11:18.220347  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 11:11:18.228465  564163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 11:11:18.244016  564163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:11:18.258096  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1101 11:11:18.272629  564163 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:11:18.276508  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:11:18.287804  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:11:18.413977  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:11:18.431959  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:11:18.432256  564163 start.go:318] joinCluster: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:11:18.432437  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 11:11:18.432483  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:11:18.451746  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:11:18.642595  564163 start.go:344] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:18.642681  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw2jul.ax6v1umh51v4f6c5 --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I1101 11:11:43.319187  564163 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw2jul.ax6v1umh51v4f6c5 --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (24.676482939s)
	I1101 11:11:43.319257  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1101 11:11:43.769284  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472819-m03 minikube.k8s.io/updated_at=2025_11_01T11_11_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=ha-472819 minikube.k8s.io/primary=false
	I1101 11:11:43.908814  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472819-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1101 11:11:44.056105  564163 start.go:320] duration metric: took 25.623843323s to joinCluster
	I1101 11:11:44.056180  564163 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:44.056483  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:44.059138  564163 out.go:179] * Verifying Kubernetes components...
	I1101 11:11:44.062045  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:11:44.226994  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:11:44.241935  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 11:11:44.242072  564163 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 11:11:44.243546  564163 node_ready.go:35] waiting up to 6m0s for node "ha-472819-m03" to be "Ready" ...
	W1101 11:11:46.248078  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:48.747123  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:50.747643  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:52.747859  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:55.248106  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:57.248199  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:59.747904  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:02.247750  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:04.747562  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:07.246739  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:09.246975  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:11.248196  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:13.747822  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:16.249127  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:18.747882  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:20.749775  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:23.247174  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:25.248192  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	I1101 11:12:26.249644  564163 node_ready.go:49] node "ha-472819-m03" is "Ready"
	I1101 11:12:26.249669  564163 node_ready.go:38] duration metric: took 42.006098949s for node "ha-472819-m03" to be "Ready" ...
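The repeated "will retry" lines above come from a poll loop that re-reads the node object until its Ready condition reports True; here that took about 42s. A minimal client-go sketch of the same kind of wait is shown below, assuming a kubeconfig path and poll interval chosen for illustration rather than minikube's actual node_ready.go.

// waitready.go - sketch of waiting for a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is a placeholder; minikube builds its client from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 6m, mirroring "waiting up to 6m0s".
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-472819-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep retrying
			}
			return nodeReady(n), nil
		})
	fmt.Println("wait finished, err =", err)
}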
	I1101 11:12:26.249682  564163 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:12:26.249800  564163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:12:26.262481  564163 api_server.go:72] duration metric: took 42.206265905s to wait for apiserver process to appear ...
	I1101 11:12:26.262504  564163 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:12:26.262523  564163 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 11:12:26.271275  564163 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 11:12:26.272261  564163 api_server.go:141] control plane version: v1.34.1
	I1101 11:12:26.272284  564163 api_server.go:131] duration metric: took 9.773431ms to wait for apiserver health ...
	I1101 11:12:26.272294  564163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:12:26.280500  564163 system_pods.go:59] 24 kube-system pods found
	I1101 11:12:26.280537  564163 system_pods.go:61] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:12:26.280544  564163 system_pods.go:61] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:12:26.280549  564163 system_pods.go:61] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:12:26.280553  564163 system_pods.go:61] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:12:26.280558  564163 system_pods.go:61] "etcd-ha-472819-m03" [80e840dc-9437-4351-967c-2a400d35dc89] Running
	I1101 11:12:26.280563  564163 system_pods.go:61] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:12:26.280567  564163 system_pods.go:61] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:12:26.280572  564163 system_pods.go:61] "kindnet-mz6bw" [217b3b0a-0680-4a26-98ee-04dd92e1b732] Running
	I1101 11:12:26.280576  564163 system_pods.go:61] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:12:26.280580  564163 system_pods.go:61] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:12:26.280585  564163 system_pods.go:61] "kube-apiserver-ha-472819-m03" [4dd6c2e8-c1fd-4a41-b208-b227db99ef54] Running
	I1101 11:12:26.280595  564163 system_pods.go:61] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:12:26.280600  564163 system_pods.go:61] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:12:26.280607  564163 system_pods.go:61] "kube-controller-manager-ha-472819-m03" [a67b5941-388f-48a8-b452-ff50be57ca66] Running
	I1101 11:12:26.280613  564163 system_pods.go:61] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:12:26.280624  564163 system_pods.go:61] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:12:26.280628  564163 system_pods.go:61] "kube-proxy-gc4g4" [2289bf2a-0371-4bad-8440-6e299ce1e8a9] Running
	I1101 11:12:26.280632  564163 system_pods.go:61] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:12:26.280644  564163 system_pods.go:61] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:12:26.280648  564163 system_pods.go:61] "kube-scheduler-ha-472819-m03" [2b72cc38-a219-4fcc-8a1e-977391aee0b1] Running
	I1101 11:12:26.280652  564163 system_pods.go:61] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:12:26.280657  564163 system_pods.go:61] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:12:26.280666  564163 system_pods.go:61] "kube-vip-ha-472819-m03" [a3e5599c-b0a5-4792-9192-397f763006fc] Running
	I1101 11:12:26.280671  564163 system_pods.go:61] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:12:26.280676  564163 system_pods.go:74] duration metric: took 8.377598ms to wait for pod list to return data ...
	I1101 11:12:26.280686  564163 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:12:26.284133  564163 default_sa.go:45] found service account: "default"
	I1101 11:12:26.284159  564163 default_sa.go:55] duration metric: took 3.464322ms for default service account to be created ...
	I1101 11:12:26.284168  564163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:12:26.289312  564163 system_pods.go:86] 24 kube-system pods found
	I1101 11:12:26.289355  564163 system_pods.go:89] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:12:26.289363  564163 system_pods.go:89] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:12:26.289367  564163 system_pods.go:89] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:12:26.289372  564163 system_pods.go:89] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:12:26.289378  564163 system_pods.go:89] "etcd-ha-472819-m03" [80e840dc-9437-4351-967c-2a400d35dc89] Running
	I1101 11:12:26.289397  564163 system_pods.go:89] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:12:26.289411  564163 system_pods.go:89] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:12:26.289416  564163 system_pods.go:89] "kindnet-mz6bw" [217b3b0a-0680-4a26-98ee-04dd92e1b732] Running
	I1101 11:12:26.289421  564163 system_pods.go:89] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:12:26.289429  564163 system_pods.go:89] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:12:26.289433  564163 system_pods.go:89] "kube-apiserver-ha-472819-m03" [4dd6c2e8-c1fd-4a41-b208-b227db99ef54] Running
	I1101 11:12:26.289438  564163 system_pods.go:89] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:12:26.289442  564163 system_pods.go:89] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:12:26.289454  564163 system_pods.go:89] "kube-controller-manager-ha-472819-m03" [a67b5941-388f-48a8-b452-ff50be57ca66] Running
	I1101 11:12:26.289458  564163 system_pods.go:89] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:12:26.289461  564163 system_pods.go:89] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:12:26.289467  564163 system_pods.go:89] "kube-proxy-gc4g4" [2289bf2a-0371-4bad-8440-6e299ce1e8a9] Running
	I1101 11:12:26.289471  564163 system_pods.go:89] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:12:26.289475  564163 system_pods.go:89] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:12:26.289479  564163 system_pods.go:89] "kube-scheduler-ha-472819-m03" [2b72cc38-a219-4fcc-8a1e-977391aee0b1] Running
	I1101 11:12:26.289483  564163 system_pods.go:89] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:12:26.289491  564163 system_pods.go:89] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:12:26.289495  564163 system_pods.go:89] "kube-vip-ha-472819-m03" [a3e5599c-b0a5-4792-9192-397f763006fc] Running
	I1101 11:12:26.289499  564163 system_pods.go:89] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:12:26.289507  564163 system_pods.go:126] duration metric: took 5.334417ms to wait for k8s-apps to be running ...
	I1101 11:12:26.289518  564163 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:12:26.289578  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:12:26.305948  564163 system_svc.go:56] duration metric: took 16.419873ms WaitForService to wait for kubelet
	I1101 11:12:26.305975  564163 kubeadm.go:587] duration metric: took 42.249764697s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:12:26.305994  564163 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:12:26.309253  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:12:26.309279  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:12:26.309289  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:12:26.309294  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:12:26.309298  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:12:26.309303  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:12:26.309308  564163 node_conditions.go:105] duration metric: took 3.308316ms to run NodePressure ...
	I1101 11:12:26.309331  564163 start.go:242] waiting for startup goroutines ...
	I1101 11:12:26.309355  564163 start.go:256] writing updated cluster config ...
	I1101 11:12:26.309669  564163 ssh_runner.go:195] Run: rm -f paused
	I1101 11:12:26.313122  564163 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:12:26.313679  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:12:26.330719  564163 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bntfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.336573  564163 pod_ready.go:94] pod "coredns-66bc5c9577-bntfw" is "Ready"
	I1101 11:12:26.336600  564163 pod_ready.go:86] duration metric: took 5.853087ms for pod "coredns-66bc5c9577-bntfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.336611  564163 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n2tp2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.343901  564163 pod_ready.go:94] pod "coredns-66bc5c9577-n2tp2" is "Ready"
	I1101 11:12:26.343929  564163 pod_ready.go:86] duration metric: took 7.293605ms for pod "coredns-66bc5c9577-n2tp2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.346848  564163 pod_ready.go:83] waiting for pod "etcd-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.352072  564163 pod_ready.go:94] pod "etcd-ha-472819" is "Ready"
	I1101 11:12:26.352102  564163 pod_ready.go:86] duration metric: took 5.227692ms for pod "etcd-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.352112  564163 pod_ready.go:83] waiting for pod "etcd-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.358541  564163 pod_ready.go:94] pod "etcd-ha-472819-m02" is "Ready"
	I1101 11:12:26.358573  564163 pod_ready.go:86] duration metric: took 6.453734ms for pod "etcd-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.358588  564163 pod_ready.go:83] waiting for pod "etcd-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.515018  564163 request.go:683] "Waited before sending request" delay="156.271647ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472819-m03"
	I1101 11:12:26.714759  564163 request.go:683] "Waited before sending request" delay="196.328427ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:26.718151  564163 pod_ready.go:94] pod "etcd-ha-472819-m03" is "Ready"
	I1101 11:12:26.718181  564163 pod_ready.go:86] duration metric: took 359.546189ms for pod "etcd-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
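The "Waited before sending request ... client-side throttling" messages above come from client-go's client-side rate limiter spacing out the pod and node GETs; with QPS and Burst left at zero in the rest.Config printed earlier, client-go falls back to its defaults (roughly 5 QPS with a burst of 10). A short sketch of raising those limits on a rest.Config follows; the numbers are illustrative, not a recommendation for these tests.

// qps.go - sketch of bumping client-go's client-side rate limits.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// Zero values mean the default limiter; raising them reduces the
	// client-side throttling waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client ready: %T\n", cs)
}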
	I1101 11:12:26.914587  564163 request.go:683] "Waited before sending request" delay="196.293202ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1101 11:12:26.918462  564163 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.115021  564163 request.go:683] "Waited before sending request" delay="196.450874ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472819"
	I1101 11:12:27.314790  564163 request.go:683] "Waited before sending request" delay="196.347915ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:27.318165  564163 pod_ready.go:94] pod "kube-apiserver-ha-472819" is "Ready"
	I1101 11:12:27.318193  564163 pod_ready.go:86] duration metric: took 399.695688ms for pod "kube-apiserver-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.318203  564163 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.514681  564163 request.go:683] "Waited before sending request" delay="196.361741ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472819-m02"
	I1101 11:12:27.714676  564163 request.go:683] "Waited before sending request" delay="196.346856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:27.718653  564163 pod_ready.go:94] pod "kube-apiserver-ha-472819-m02" is "Ready"
	I1101 11:12:27.718733  564163 pod_ready.go:86] duration metric: took 400.522554ms for pod "kube-apiserver-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.718769  564163 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.915203  564163 request.go:683] "Waited before sending request" delay="196.340399ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472819-m03"
	I1101 11:12:28.114358  564163 request.go:683] "Waited before sending request" delay="195.249259ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:28.117775  564163 pod_ready.go:94] pod "kube-apiserver-ha-472819-m03" is "Ready"
	I1101 11:12:28.117807  564163 pod_ready.go:86] duration metric: took 399.013365ms for pod "kube-apiserver-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.314349  564163 request.go:683] "Waited before sending request" delay="196.417856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1101 11:12:28.318465  564163 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.514898  564163 request.go:683] "Waited before sending request" delay="196.319665ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472819"
	I1101 11:12:28.714748  564163 request.go:683] "Waited before sending request" delay="196.35411ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:28.718210  564163 pod_ready.go:94] pod "kube-controller-manager-ha-472819" is "Ready"
	I1101 11:12:28.718240  564163 pod_ready.go:86] duration metric: took 399.743024ms for pod "kube-controller-manager-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.718250  564163 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.914684  564163 request.go:683] "Waited before sending request" delay="196.336731ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472819-m02"
	I1101 11:12:29.114836  564163 request.go:683] "Waited before sending request" delay="196.362029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:29.118262  564163 pod_ready.go:94] pod "kube-controller-manager-ha-472819-m02" is "Ready"
	I1101 11:12:29.118307  564163 pod_ready.go:86] duration metric: took 400.051245ms for pod "kube-controller-manager-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.118318  564163 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.314606  564163 request.go:683] "Waited before sending request" delay="196.212242ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472819-m03"
	I1101 11:12:29.514310  564163 request.go:683] "Waited before sending request" delay="196.164808ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:29.519832  564163 pod_ready.go:94] pod "kube-controller-manager-ha-472819-m03" is "Ready"
	I1101 11:12:29.519865  564163 pod_ready.go:86] duration metric: took 401.539002ms for pod "kube-controller-manager-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.715277  564163 request.go:683] "Waited before sending request" delay="195.313572ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1101 11:12:29.719110  564163 pod_ready.go:83] waiting for pod "kube-proxy-47prj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.914444  564163 request.go:683] "Waited before sending request" delay="195.229952ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47prj"
	I1101 11:12:30.115044  564163 request.go:683] "Waited before sending request" delay="197.189419ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:30.122807  564163 pod_ready.go:94] pod "kube-proxy-47prj" is "Ready"
	I1101 11:12:30.122901  564163 pod_ready.go:86] duration metric: took 403.759526ms for pod "kube-proxy-47prj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.122920  564163 pod_ready.go:83] waiting for pod "kube-proxy-djfvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.314268  564163 request.go:683] "Waited before sending request" delay="191.266408ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djfvb"
	I1101 11:12:30.515306  564163 request.go:683] "Waited before sending request" delay="197.533768ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:30.518543  564163 pod_ready.go:94] pod "kube-proxy-djfvb" is "Ready"
	I1101 11:12:30.518576  564163 pod_ready.go:86] duration metric: took 395.647433ms for pod "kube-proxy-djfvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.518587  564163 pod_ready.go:83] waiting for pod "kube-proxy-gc4g4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.715050  564163 request.go:683] "Waited before sending request" delay="196.355957ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gc4g4"
	I1101 11:12:30.915104  564163 request.go:683] "Waited before sending request" delay="194.316785ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:30.920032  564163 pod_ready.go:94] pod "kube-proxy-gc4g4" is "Ready"
	I1101 11:12:30.920064  564163 pod_ready.go:86] duration metric: took 401.469274ms for pod "kube-proxy-gc4g4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.114457  564163 request.go:683] "Waited before sending request" delay="194.275438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1101 11:12:31.118943  564163 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.314321  564163 request.go:683] "Waited before sending request" delay="195.278536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-472819"
	I1101 11:12:31.515230  564163 request.go:683] "Waited before sending request" delay="197.303661ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:31.518454  564163 pod_ready.go:94] pod "kube-scheduler-ha-472819" is "Ready"
	I1101 11:12:31.518487  564163 pod_ready.go:86] duration metric: took 399.509488ms for pod "kube-scheduler-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.518498  564163 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.714950  564163 request.go:683] "Waited before sending request" delay="196.3489ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-472819-m02"
	I1101 11:12:31.914922  564163 request.go:683] "Waited before sending request" delay="196.326566ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:31.918119  564163 pod_ready.go:94] pod "kube-scheduler-ha-472819-m02" is "Ready"
	I1101 11:12:31.918150  564163 pod_ready.go:86] duration metric: took 399.645153ms for pod "kube-scheduler-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.918159  564163 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:32.114581  564163 request.go:683] "Waited before sending request" delay="196.334475ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-472819-m03"
	I1101 11:12:32.315116  564163 request.go:683] "Waited before sending request" delay="196.314915ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:32.318291  564163 pod_ready.go:94] pod "kube-scheduler-ha-472819-m03" is "Ready"
	I1101 11:12:32.318319  564163 pod_ready.go:86] duration metric: took 400.143654ms for pod "kube-scheduler-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:32.318333  564163 pod_ready.go:40] duration metric: took 6.005166383s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:12:32.374771  564163 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 11:12:32.379957  564163 out.go:179] * Done! kubectl is now configured to use "ha-472819" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 11:10:29 ha-472819 crio[835]: time="2025-11-01T11:10:29.724673509Z" level=info msg="Created container b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a: kube-system/coredns-66bc5c9577-n2tp2/coredns" id=4a2a84c1-22ae-4f46-8149-9f44dec0e1df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:10:29 ha-472819 crio[835]: time="2025-11-01T11:10:29.726091618Z" level=info msg="Starting container: b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a" id=182c8112-171d-4979-810a-ec966f9ee5bd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:10:29 ha-472819 crio[835]: time="2025-11-01T11:10:29.728389345Z" level=info msg="Started container" PID=1830 containerID=b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a description=kube-system/coredns-66bc5c9577-n2tp2/coredns id=182c8112-171d-4979-810a-ec966f9ee5bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.120480779Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-lm6r8/POD" id=ae159a9e-0cdc-4387-8101-f3339714c067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.120560313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.151990285Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-lm6r8 Namespace:default ID:1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 UID:3faf7e64-22cf-4338-92ef-39a2978dacb5 NetNS:/var/run/netns/273515f3-fe48-4c9a-a5a0-ca5b0e3ab433 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d90530}] Aliases:map[]}"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.152188171Z" level=info msg="Adding pod default_busybox-7b57f96db7-lm6r8 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.174049137Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-lm6r8 Namespace:default ID:1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 UID:3faf7e64-22cf-4338-92ef-39a2978dacb5 NetNS:/var/run/netns/273515f3-fe48-4c9a-a5a0-ca5b0e3ab433 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d90530}] Aliases:map[]}"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.174388982Z" level=info msg="Checking pod default_busybox-7b57f96db7-lm6r8 for CNI network kindnet (type=ptp)"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.18392826Z" level=info msg="Ran pod sandbox 1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 with infra container: default/busybox-7b57f96db7-lm6r8/POD" id=ae159a9e-0cdc-4387-8101-f3339714c067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.18737883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=f0278987-4284-4ba6-99d3-3e1b3bcbe42b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.187693714Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=f0278987-4284-4ba6-99d3-3e1b3bcbe42b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.187803098Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28 found" id=f0278987-4284-4ba6-99d3-3e1b3bcbe42b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.189366827Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=8af5159f-b7ff-433f-9810-f5aaf54d8516 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.19276737Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.210173626Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=8af5159f-b7ff-433f-9810-f5aaf54d8516 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.211378047Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=62e5c4c6-2347-4768-b7b2-e7d361a90c66 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.213299064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=9d165b72-72dd-44dd-baee-e3b9079ec16f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.224587271Z" level=info msg="Creating container: default/busybox-7b57f96db7-lm6r8/busybox" id=f3cbc5a1-6aaf-468c-ad23-e310e4e6c169 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.224848058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.249270156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.250049973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.282130596Z" level=info msg="Created container dff6a4a869cee8df9dc4d3d269f3081a5f7b6994fbe3813528d07d7a06f03fb6: default/busybox-7b57f96db7-lm6r8/busybox" id=f3cbc5a1-6aaf-468c-ad23-e310e4e6c169 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.288280744Z" level=info msg="Starting container: dff6a4a869cee8df9dc4d3d269f3081a5f7b6994fbe3813528d07d7a06f03fb6" id=0733d7c4-1f36-4706-b6d2-98a90d511fb9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.300369166Z" level=info msg="Started container" PID=1986 containerID=dff6a4a869cee8df9dc4d3d269f3081a5f7b6994fbe3813528d07d7a06f03fb6 description=default/busybox-7b57f96db7-lm6r8/busybox id=0733d7c4-1f36-4706-b6d2-98a90d511fb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	dff6a4a869cee       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   10 minutes ago      Running             busybox                   0                   1d1abc560619e       busybox-7b57f96db7-lm6r8            default
	b91918178a88a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 minutes ago      Running             coredns                   0                   2c45f2568b0e8       coredns-66bc5c9577-n2tp2            kube-system
	c8ab7117746d2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 minutes ago      Running             coredns                   0                   f161ed77d0204       coredns-66bc5c9577-bntfw            kube-system
	f3816faa8e434       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 minutes ago      Running             storage-provisioner       0                   48ac5b7666614       storage-provisioner                 kube-system
	7078104c50ff2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      12 minutes ago      Running             kube-proxy                0                   cbb2812743bd4       kube-proxy-djfvb                    kube-system
	6af4febe46d8a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      12 minutes ago      Running             kindnet-cni               0                   45d9b924aafb5       kindnet-dkhrw                       kube-system
	58f10619def7f       ghcr.io/kube-vip/kube-vip@sha256:a9c131fb1bd4690cd4563761c2f545eb89b92cc8ea19aec96c833d1b4b0211eb     13 minutes ago      Running             kube-vip                  0                   1eb623b05a53f       kube-vip-ha-472819                  kube-system
	91af80c077c55       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      13 minutes ago      Running             kube-apiserver            0                   86e0901f54771       kube-apiserver-ha-472819            kube-system
	f940f08b4a7e5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      13 minutes ago      Running             kube-scheduler            0                   956e3189233cf       kube-scheduler-ha-472819            kube-system
	640585dbb86b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      13 minutes ago      Running             etcd                      0                   b159389b39c8d       etcd-ha-472819                      kube-system
	6bf6ea4411cda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      13 minutes ago      Running             kube-controller-manager   0                   feba7cf49ce4a       kube-controller-manager-ha-472819   kube-system
	
	
	==> coredns [b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a] <==
	[INFO] 10.244.2.2:44007 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.115679076s
	[INFO] 10.244.0.4:48262 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000087016s
	[INFO] 10.244.1.2:44384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134082s
	[INFO] 10.244.1.2:56915 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.0000838s
	[INFO] 10.244.1.2:41130 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.00008604s
	[INFO] 10.244.2.2:49290 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001659803s
	[INFO] 10.244.2.2:38229 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154456s
	[INFO] 10.244.0.4:54270 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004350553s
	[INFO] 10.244.0.4:50580 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195005s
	[INFO] 10.244.1.2:42868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013002s
	[INFO] 10.244.1.2:36862 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001492458s
	[INFO] 10.244.1.2:48136 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177652s
	[INFO] 10.244.1.2:50876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135198s
	[INFO] 10.244.1.2:44573 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001077618s
	[INFO] 10.244.1.2:38478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152281s
	[INFO] 10.244.2.2:52114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135033s
	[INFO] 10.244.2.2:49246 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184634s
	[INFO] 10.244.2.2:58049 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00018548s
	[INFO] 10.244.2.2:35795 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086098s
	[INFO] 10.244.0.4:60969 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090381s
	[INFO] 10.244.1.2:54184 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173106s
	[INFO] 10.244.1.2:37354 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067488s
	[INFO] 10.244.2.2:38119 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161463s
	[INFO] 10.244.2.2:47922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124909s
	[INFO] 10.244.0.4:45686 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119181s
	
	
	==> coredns [c8ab7117746d22a14339221aee8d8b6add959c38472cacd236bfc7b815920794] <==
	[INFO] 10.244.2.2:46792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103722s
	[INFO] 10.244.2.2:60032 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116769s
	[INFO] 10.244.2.2:32816 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157507s
	[INFO] 10.244.0.4:34068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119369s
	[INFO] 10.244.0.4:37836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003113598s
	[INFO] 10.244.0.4:33426 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00023439s
	[INFO] 10.244.0.4:48388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131604s
	[INFO] 10.244.0.4:39230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00017862s
	[INFO] 10.244.0.4:50943 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083883s
	[INFO] 10.244.1.2:46996 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014355s
	[INFO] 10.244.1.2:58261 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099439s
	[INFO] 10.244.0.4:58425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010762s
	[INFO] 10.244.0.4:55611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159009s
	[INFO] 10.244.0.4:48378 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115505s
	[INFO] 10.244.1.2:38348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116711s
	[INFO] 10.244.1.2:59106 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220499s
	[INFO] 10.244.2.2:37813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000152002s
	[INFO] 10.244.2.2:56106 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019215s
	[INFO] 10.244.0.4:41265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094089s
	[INFO] 10.244.0.4:47425 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059792s
	[INFO] 10.244.0.4:33602 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068554s
	[INFO] 10.244.1.2:35104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143871s
	[INFO] 10.244.1.2:59782 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081625s
	[INFO] 10.244.1.2:39995 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080247s
	[INFO] 10.244.1.2:46827 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061088s
	
	
	==> describe nodes <==
	Name:               ha-472819
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_09_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:09:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:09:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:09:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:09:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:10:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472819
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                60304d9d-d149-4b0e-8acf-98dc18a25376
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lm6r8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-bntfw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-n2tp2             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-ha-472819                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-dkhrw                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-472819             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472819    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-djfvb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-472819             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472819                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 13m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m   kubelet          Node ha-472819 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m   kubelet          Node ha-472819 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m   kubelet          Node ha-472819 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node ha-472819 event: Registered Node ha-472819 in Controller
	  Normal   RegisteredNode           12m   node-controller  Node ha-472819 event: Registered Node ha-472819 in Controller
	  Normal   NodeReady                12m   kubelet          Node ha-472819 status is now: NodeReady
	  Normal   RegisteredNode           11m   node-controller  Node ha-472819 event: Registered Node ha-472819 in Controller
	
	
	Name:               ha-472819-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_01T11_10_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:10:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:14:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472819-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c598c781-8aa3-4c9a-acbe-21bfb38aa260
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-x679v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-472819-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-cw2kt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-472819-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-472819-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-47prj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-472819-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-472819-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-472819-m02 event: Registered Node ha-472819-m02 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-472819-m02 event: Registered Node ha-472819-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-472819-m02 event: Registered Node ha-472819-m02 in Controller
	  Normal  NodeNotReady    7m47s  node-controller  Node ha-472819-m02 status is now: NodeNotReady
	
	
	Name:               ha-472819-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_01T11_11_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:11:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:11:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:11:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472819-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dab78d92-59b1-457d-81c0-7efcc6e5bf35
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-7m8cp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-472819-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-mz6bw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-472819-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-472819-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gc4g4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-472819-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-472819-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                Age   From             Message
	  ----    ------                ----  ----             -------
	  Normal  Starting              10m   kube-proxy       
	  Normal  CIDRAssignmentFailed  11m   cidrAllocator    Node ha-472819-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        10m   node-controller  Node ha-472819-m03 event: Registered Node ha-472819-m03 in Controller
	  Normal  RegisteredNode        10m   node-controller  Node ha-472819-m03 event: Registered Node ha-472819-m03 in Controller
	  Normal  RegisteredNode        10m   node-controller  Node ha-472819-m03 event: Registered Node ha-472819-m03 in Controller
	
	
	Name:               ha-472819-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_01T11_13_00_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:22:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:13:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-472819-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e495f925-4ef2-41a0-86db-65c0daddf116
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-x67zv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kindnet-88sf2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m44s
	  kube-system                 kube-proxy-79nw9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m41s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     9m44s                  cidrAllocator    Node ha-472819-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     9m44s                  cidrAllocator    Node ha-472819-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  9m44s (x3 over 9m44s)  kubelet          Node ha-472819-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m44s (x3 over 9m44s)  kubelet          Node ha-472819-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m44s (x3 over 9m44s)  kubelet          Node ha-472819-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-472819-m04 event: Registered Node ha-472819-m04 in Controller
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-472819-m04 event: Registered Node ha-472819-m04 in Controller
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-472819-m04 event: Registered Node ha-472819-m04 in Controller
	  Normal  NodeReady                9m1s                   kubelet          Node ha-472819-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:40] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[ +12.370416] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:55] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:09] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [640585dbb86b99532c1a5c54e4cb7548846d3ee044b85ae39e75a467ff5a3081] <==
	{"level":"warn","ts":"2025-11-01T11:22:16.326748Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:16.722760Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:16.722775Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:20.328310Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:20.328463Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:21.723300Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:21.723316Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:24.330041Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:24.330107Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:26.724057Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:26.724046Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:28.331190Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:28.331378Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:31.725003Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:31.725022Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:32.332651Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:32.332707Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.334390Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.334512Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.725833Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.725839Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:40.335613Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:40.335670Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:41.726016Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:41.726102Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 11:22:44 up  3:05,  0 user,  load average: 0.90, 0.96, 1.36
	Linux ha-472819 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6af4febe46d8a121ee2c8a9dbe81d96bb1173a205d2aadbbf0c7fd9d38d70f1b] <==
	I1101 11:22:08.933679       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:18.941992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:22:18.942027       1 main.go:301] handling current node
	I1101 11:22:18.942044       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1101 11:22:18.942051       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:18.942222       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1101 11:22:18.942231       1 main.go:324] Node ha-472819-m03 has CIDR [10.244.2.0/24] 
	I1101 11:22:18.942293       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1101 11:22:18.942304       1 main.go:324] Node ha-472819-m04 has CIDR [10.244.4.0/24] 
	I1101 11:22:28.940035       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1101 11:22:28.940091       1 main.go:324] Node ha-472819-m04 has CIDR [10.244.4.0/24] 
	I1101 11:22:28.940300       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:22:28.940311       1 main.go:301] handling current node
	I1101 11:22:28.940352       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1101 11:22:28.940360       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:28.940440       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1101 11:22:28.940446       1 main.go:324] Node ha-472819-m03 has CIDR [10.244.2.0/24] 
	I1101 11:22:38.933759       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:22:38.933796       1 main.go:301] handling current node
	I1101 11:22:38.933812       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1101 11:22:38.933819       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:38.933954       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1101 11:22:38.933969       1 main.go:324] Node ha-472819-m03 has CIDR [10.244.2.0/24] 
	I1101 11:22:38.934025       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1101 11:22:38.934036       1 main.go:324] Node ha-472819-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8] <==
	I1101 11:09:40.797825       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:09:40.850033       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:09:40.915102       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 11:09:40.924959       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 11:09:40.926249       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:09:40.931594       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:09:41.095631       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 11:09:41.859882       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 11:09:41.878887       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 11:09:41.887812       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 11:09:46.450412       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 11:09:47.188658       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:09:47.202210       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:09:47.251429       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 11:12:37.069649       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1101 11:12:37.755999       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39660: use of closed network connection
	E1101 11:12:38.787350       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39724: use of closed network connection
	E1101 11:12:39.032436       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39744: use of closed network connection
	E1101 11:12:39.263792       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39766: use of closed network connection
	E1101 11:12:39.693531       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39796: use of closed network connection
	E1101 11:12:40.115015       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39810: use of closed network connection
	E1101 11:12:40.338287       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39828: use of closed network connection
	E1101 11:12:40.547346       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39850: use of closed network connection
	E1101 11:12:40.760719       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39870: use of closed network connection
	I1101 11:19:38.988023       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6bf6ea4411cda5dbfef374975a27f08c60164beec1853c8ba8df3c4f23b6c666] <==
	I1101 11:09:46.183504       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472819" podCIDRs=["10.244.0.0/24"]
	I1101 11:09:46.190773       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 11:10:18.068385       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472819-m02\" does not exist"
	I1101 11:10:18.086957       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472819-m02" podCIDRs=["10.244.1.0/24"]
	I1101 11:10:21.154242       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472819-m02"
	I1101 11:10:30.317774       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tql2r EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tql2r\": the object has been modified; please apply your changes to the latest version and try again"
	I1101 11:10:30.318055       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e71f1e87-b843-4235-9d7d-ceeca6034661", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tql2r EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tql2r": the object has been modified; please apply your changes to the latest version and try again
	I1101 11:10:31.155699       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1101 11:11:42.459308       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-drx6q failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-drx6q\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1101 11:11:42.471104       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-drx6q failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-drx6q\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1101 11:11:42.992046       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472819-m03\" does not exist"
	I1101 11:11:43.069592       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472819-m03" podCIDRs=["10.244.2.0/24"]
	I1101 11:11:46.211739       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472819-m03"
	E1101 11:12:59.200871       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-mdtrv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-mdtrv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1101 11:12:59.200944       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-mdtrv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-mdtrv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1101 11:12:59.416895       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472819-m04\" does not exist"
	E1101 11:12:59.620317       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-472819-m04\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.3.0/24\",\"10.244.4.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-472819-m04" podCIDRs=["10.244.3.0/24"]
	E1101 11:12:59.620451       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-472819-m04\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.3.0/24\",\"10.244.4.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-472819-m04"
	E1101 11:12:59.620535       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'ha-472819-m04': failed to patch node CIDR: Node \"ha-472819-m04\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.3.0/24\",\"10.244.4.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	E1101 11:12:59.854362       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"c68c3d5f-200a-4729-99a6-399d13923da3\", ResourceVersion:\"902\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 1, 11, 9, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40019ffba0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\",
Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(
nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4002d4d580), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e12768), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeS
ource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolu
meSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e12780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualD
iskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x4002c98ab0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Resou
rceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecyc
le:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x4002a864e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002d266f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002f47a70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tolerat
ionSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4002d0b960)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4002d26750)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:
3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1101 11:13:01.243524       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472819-m04"
	I1101 11:13:42.484705       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-472819-m04"
	I1101 11:14:56.316641       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-472819-m04"
	I1101 11:19:56.579535       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-x679v"
	
	
	==> kube-proxy [7078104c50ff2f92f7e2c1df5b91f0bd0cf730fe4a2b36f8082f1d451dd65225] <==
	I1101 11:09:48.742762       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:09:48.836468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:09:48.950591       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:09:48.950696       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 11:09:48.950821       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:09:49.042998       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:09:49.043148       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:09:49.123164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:09:49.123454       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:09:49.123477       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:09:49.125197       1 config.go:200] "Starting service config controller"
	I1101 11:09:49.125224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:09:49.125242       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:09:49.125247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:09:49.125257       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:09:49.125261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:09:49.126013       1 config.go:309] "Starting node config controller"
	I1101 11:09:49.126032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:09:49.126039       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:09:49.225716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:09:49.225755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:09:49.225779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f940f08b4a7e5f2a89503aec05980619c7af103b702262fe033b3ddbff81a5db] <==
	I1101 11:12:33.814471       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-7m8cp" node="ha-472819-m03"
	E1101 11:12:59.527016       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j9j6f\": pod kube-proxy-j9j6f is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j9j6f" node="ha-472819-m04"
	E1101 11:12:59.527158       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 82874d15-7a7e-4291-bdfe-322ff3beceb7(kube-system/kube-proxy-j9j6f) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-j9j6f"
	E1101 11:12:59.527232       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j9j6f\": pod kube-proxy-j9j6f is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-j9j6f"
	I1101 11:12:59.531550       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j9j6f" node="ha-472819-m04"
	E1101 11:12:59.593960       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r2qzc\": pod kindnet-r2qzc is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r2qzc" node="ha-472819-m04"
	E1101 11:12:59.594130       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5cf8f544-dc95-4924-b3df-2e668d7cd5bd(kube-system/kindnet-r2qzc) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-r2qzc"
	E1101 11:12:59.594206       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r2qzc\": pod kindnet-r2qzc is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kindnet-r2qzc"
	I1101 11:12:59.597504       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r2qzc" node="ha-472819-m04"
	E1101 11:12:59.662573       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hdmgp\": pod kindnet-hdmgp is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hdmgp" node="ha-472819-m04"
	E1101 11:12:59.662698       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5db403fb-b722-43b5-a7f8-72eb2cb15ab8(kube-system/kindnet-hdmgp) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-hdmgp"
	E1101 11:12:59.662754       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hdmgp\": pod kindnet-hdmgp is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kindnet-hdmgp"
	I1101 11:12:59.672405       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hdmgp" node="ha-472819-m04"
	E1101 11:12:59.723505       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-88sf2\": pod kindnet-88sf2 is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-88sf2" node="ha-472819-m04"
	E1101 11:12:59.723658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 61cdb567-0db6-43a9-b37e-206c4b1e424b(kube-system/kindnet-88sf2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-88sf2"
	E1101 11:12:59.723717       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-88sf2\": pod kindnet-88sf2 is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kindnet-88sf2"
	I1101 11:12:59.724870       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-88sf2" node="ha-472819-m04"
	E1101 11:12:59.725681       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-chm7z\": pod kube-proxy-chm7z is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-chm7z" node="ha-472819-m04"
	E1101 11:12:59.725836       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 91c0e8b9-9d13-45a7-b93c-cbc34b19bbf2(kube-system/kube-proxy-chm7z) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-chm7z"
	E1101 11:12:59.725898       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-chm7z\": pod kube-proxy-chm7z is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-chm7z"
	I1101 11:12:59.727068       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-chm7z" node="ha-472819-m04"
	E1101 11:19:56.681886       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x67zv\": pod busybox-7b57f96db7-x67zv is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x67zv" node="ha-472819-m04"
	E1101 11:19:56.681946       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 26e53bec-ba78-49cd-9271-6982e344344b(default/busybox-7b57f96db7-x67zv) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x67zv"
	E1101 11:19:56.681966       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x67zv\": pod busybox-7b57f96db7-x67zv is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x67zv"
	I1101 11:19:56.683079       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x67zv" node="ha-472819-m04"
	
	
	==> kubelet <==
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376661    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c010b85-48bd-4004-886f-fbe4e03884a9-xtables-lock\") pod \"kube-proxy-djfvb\" (UID: \"2c010b85-48bd-4004-886f-fbe4e03884a9\") " pod="kube-system/kube-proxy-djfvb"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376718    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c010b85-48bd-4004-886f-fbe4e03884a9-lib-modules\") pod \"kube-proxy-djfvb\" (UID: \"2c010b85-48bd-4004-886f-fbe4e03884a9\") " pod="kube-system/kube-proxy-djfvb"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376737    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcg8\" (UniqueName: \"kubernetes.io/projected/2c010b85-48bd-4004-886f-fbe4e03884a9-kube-api-access-zwcg8\") pod \"kube-proxy-djfvb\" (UID: \"2c010b85-48bd-4004-886f-fbe4e03884a9\") " pod="kube-system/kube-proxy-djfvb"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376798    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-xtables-lock\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376817    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qtk\" (UniqueName: \"kubernetes.io/projected/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-kube-api-access-56qtk\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376871    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-cni-cfg\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376891    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-lib-modules\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:48 ha-472819 kubelet[1341]: I1101 11:09:48.435774    1341 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 11:09:48 ha-472819 kubelet[1341]: I1101 11:09:48.997603    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dkhrw" podStartSLOduration=1.997582561 podStartE2EDuration="1.997582561s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:09:48.987766505 +0000 UTC m=+7.290989791" watchObservedRunningTime="2025-11-01 11:09:48.997582561 +0000 UTC m=+7.300805838"
	Nov 01 11:09:50 ha-472819 kubelet[1341]: I1101 11:09:50.808988    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-djfvb" podStartSLOduration=3.808968293 podStartE2EDuration="3.808968293s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:09:49.024949256 +0000 UTC m=+7.328172541" watchObservedRunningTime="2025-11-01 11:09:50.808968293 +0000 UTC m=+9.112191562"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.183523    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.306931    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbgz\" (UniqueName: \"kubernetes.io/projected/17503733-2ab6-460c-aa3f-21d031c70abd-kube-api-access-kpbgz\") pod \"coredns-66bc5c9577-bntfw\" (UID: \"17503733-2ab6-460c-aa3f-21d031c70abd\") " pod="kube-system/coredns-66bc5c9577-bntfw"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.307142    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/18119b45-4932-4521-b0e9-e3a73bc6d3b1-tmp\") pod \"storage-provisioner\" (UID: \"18119b45-4932-4521-b0e9-e3a73bc6d3b1\") " pod="kube-system/storage-provisioner"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.307230    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17503733-2ab6-460c-aa3f-21d031c70abd-config-volume\") pod \"coredns-66bc5c9577-bntfw\" (UID: \"17503733-2ab6-460c-aa3f-21d031c70abd\") " pod="kube-system/coredns-66bc5c9577-bntfw"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.307322    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltf68\" (UniqueName: \"kubernetes.io/projected/18119b45-4932-4521-b0e9-e3a73bc6d3b1-kube-api-access-ltf68\") pod \"storage-provisioner\" (UID: \"18119b45-4932-4521-b0e9-e3a73bc6d3b1\") " pod="kube-system/storage-provisioner"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.408072    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6711b0-f71a-421e-922d-eb44266c95a4-config-volume\") pod \"coredns-66bc5c9577-n2tp2\" (UID: \"4b6711b0-f71a-421e-922d-eb44266c95a4\") " pod="kube-system/coredns-66bc5c9577-n2tp2"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.408314    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfm8p\" (UniqueName: \"kubernetes.io/projected/4b6711b0-f71a-421e-922d-eb44266c95a4-kube-api-access-gfm8p\") pod \"coredns-66bc5c9577-n2tp2\" (UID: \"4b6711b0-f71a-421e-922d-eb44266c95a4\") " pod="kube-system/coredns-66bc5c9577-n2tp2"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: W1101 11:10:29.609774    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio-f161ed77d020465b8012f1a83590dec691bc6100c6055b30c7d61753e2d2be2a WatchSource:0}: Error finding container f161ed77d020465b8012f1a83590dec691bc6100c6055b30c7d61753e2d2be2a: Status 404 returned error can't find the container with id f161ed77d020465b8012f1a83590dec691bc6100c6055b30c7d61753e2d2be2a
	Nov 01 11:10:29 ha-472819 kubelet[1341]: W1101 11:10:29.613683    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio-2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d WatchSource:0}: Error finding container 2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d: Status 404 returned error can't find the container with id 2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d
	Nov 01 11:10:30 ha-472819 kubelet[1341]: I1101 11:10:30.116802    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bntfw" podStartSLOduration=43.116782296 podStartE2EDuration="43.116782296s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:10:30.116630032 +0000 UTC m=+48.419853300" watchObservedRunningTime="2025-11-01 11:10:30.116782296 +0000 UTC m=+48.420005573"
	Nov 01 11:10:30 ha-472819 kubelet[1341]: I1101 11:10:30.270805    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.270784831 podStartE2EDuration="43.270784831s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:10:30.247572064 +0000 UTC m=+48.550795357" watchObservedRunningTime="2025-11-01 11:10:30.270784831 +0000 UTC m=+48.574008108"
	Nov 01 11:12:33 ha-472819 kubelet[1341]: I1101 11:12:33.798528    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n2tp2" podStartSLOduration=166.798500202 podStartE2EDuration="2m46.798500202s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:10:30.272373486 +0000 UTC m=+48.575596780" watchObservedRunningTime="2025-11-01 11:12:33.798500202 +0000 UTC m=+172.101723635"
	Nov 01 11:12:33 ha-472819 kubelet[1341]: I1101 11:12:33.919051    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqgfw\" (UniqueName: \"kubernetes.io/projected/3faf7e64-22cf-4338-92ef-39a2978dacb5-kube-api-access-dqgfw\") pod \"busybox-7b57f96db7-lm6r8\" (UID: \"3faf7e64-22cf-4338-92ef-39a2978dacb5\") " pod="default/busybox-7b57f96db7-lm6r8"
	Nov 01 11:12:34 ha-472819 kubelet[1341]: W1101 11:12:34.180348    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio-1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 WatchSource:0}: Error finding container 1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6: Status 404 returned error can't find the container with id 1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6
	Nov 01 11:12:36 ha-472819 kubelet[1341]: I1101 11:12:36.528629    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-lm6r8" podStartSLOduration=1.504563771 podStartE2EDuration="3.528612432s" podCreationTimestamp="2025-11-01 11:12:33 +0000 UTC" firstStartedPulling="2025-11-01 11:12:34.188294132 +0000 UTC m=+172.491517409" lastFinishedPulling="2025-11-01 11:12:36.212342793 +0000 UTC m=+174.515566070" observedRunningTime="2025-11-01 11:12:36.528156057 +0000 UTC m=+174.831379350" watchObservedRunningTime="2025-11-01 11:12:36.528612432 +0000 UTC m=+174.831835701"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-472819 -n ha-472819
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (506.07s)
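The etcd log above shows the surviving member repeatedly failing to reach peer 99ad86fd494346b at https://192.168.49.3:2380 with "connection refused", i.e. the restarted secondary ha-472819-m02 never brought its etcd member back up before the test timed out. A minimal sketch (not harness output) of how one might confirm this by hand, reusing the profile and node names from the logs above and assuming crictl and ss are available inside the node image, which is not verified here:

	out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 -- sudo crictl ps -a --name etcd
	out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 -- sudo ss -ltn

Nothing listening on :2380 in the second command would point at the etcd container on m02 not running, rather than a TLS or routing problem.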

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.19106438s)
ha_test.go:309: expected profile "ha-472819" in json of 'profile list' to have "HAppy" status but have "Degraded" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-472819\",\"Status\":\"Degraded\",\"Config\":{\"Name\":\"ha-472819\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-472819\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-devi
ce-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":
false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-472819
helpers_test.go:243: (dbg) docker inspect ha-472819:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c",
	        "Created": "2025-11-01T11:09:20.899997169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:09:20.960423395Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/hosts",
	        "LogPath": "/var/lib/docker/containers/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c-json.log",
	        "Name": "/ha-472819",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-472819:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-472819",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c",
	                "LowerDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2b4ec64838dd5e359c9159df7be29d4c92c2974901ee3965fdfb4d3899d9b98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-472819",
	                "Source": "/var/lib/docker/volumes/ha-472819/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-472819",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-472819",
	                "name.minikube.sigs.k8s.io": "ha-472819",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d06c75db1f88cbad3b99e1d3febd830132bbd4294bd314a091e234e9ed41115",
	            "SandboxKey": "/var/run/docker/netns/2d06c75db1f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-472819": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:12:7f:3f:18:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fad877b9a6cbf2fecd3371f8a88631aadb56e394476f97473ad152037f12fe08",
	                    "EndpointID": "3d75015284989adc37a7194f7d4e42693d55ecd110cde90b4ea89049faa60f3e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-472819",
	                        "66de5fe90fef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
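
The inspect output above explains the port numbers that show up later in the log: each published container port (22, 2376, 5000, 8443 and 32443/tcp) is requested with an empty HostPort under HostConfig.PortBindings, so Docker assigns ephemeral host ports bound to 127.0.0.1 (33510-33514 in this run), and the assignments only become visible under NetworkSettings.Ports once the container is up. Below is a minimal Go sketch of reading such a mapping back with the same docker template string that appears further down in the start log; the helper name and error handling are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor asks the Docker CLI which ephemeral host port was assigned to a
// published container port (e.g. "22/tcp"), using the same inspect template
// the start log runs later ("{{(index (index .NetworkSettings.Ports ...) 0).HostPort}}").
func hostPortFor(container, containerPort string) (string, error) {
	format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor("ha-472819", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh reachable on 127.0.0.1:" + port) // e.g. 33510 in this run
}
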
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-472819 -n ha-472819
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 logs -n 25: (1.375189806s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ cp      │ ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m03_ha-472819.txt                       │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819 sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819.txt                                                 │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ cp      │ ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819-m03_ha-472819-m02.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m02 sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819-m02.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ cp      │ ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819-m04:/home/docker/cp-test_ha-472819-m03_ha-472819-m04.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:13 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819-m04.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:13 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp testdata/cp-test.txt ha-472819-m04:/home/docker/cp-test.txt                                                             │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3224874569/001/cp-test_ha-472819-m04.txt │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m04_ha-472819.txt                       │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819 sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819.txt                                                 │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819-m04_ha-472819-m02.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m02 sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819-m02.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ cp      │ ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819-m03:/home/docker/cp-test_ha-472819-m04_ha-472819-m03.txt               │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ ssh     │ ha-472819 ssh -n ha-472819-m03 sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819-m03.txt                                         │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ node    │ ha-472819 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │ 01 Nov 25 11:14 UTC │
	│ node    │ ha-472819 node start m02 --alsologtostderr -v 5                                                                                      │ ha-472819 │ jenkins │ v1.37.0 │ 01 Nov 25 11:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:09:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:09:15.424948  564163 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:09:15.425098  564163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:09:15.425112  564163 out.go:374] Setting ErrFile to fd 2...
	I1101 11:09:15.425118  564163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:09:15.425408  564163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:09:15.425949  564163 out.go:368] Setting JSON to false
	I1101 11:09:15.426851  564163 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10305,"bootTime":1761985051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:09:15.426921  564163 start.go:143] virtualization:  
	I1101 11:09:15.433995  564163 out.go:179] * [ha-472819] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:09:15.437863  564163 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:09:15.437980  564163 notify.go:221] Checking for updates...
	I1101 11:09:15.444935  564163 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:09:15.448333  564163 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:09:15.451645  564163 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:09:15.454845  564163 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:09:15.458142  564163 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:09:15.461449  564163 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:09:15.483754  564163 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:09:15.483880  564163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:09:15.548573  564163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 11:09:15.539247568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:09:15.548693  564163 docker.go:319] overlay module found
	I1101 11:09:15.552124  564163 out.go:179] * Using the docker driver based on user configuration
	I1101 11:09:15.555278  564163 start.go:309] selected driver: docker
	I1101 11:09:15.555310  564163 start.go:930] validating driver "docker" against <nil>
	I1101 11:09:15.555327  564163 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:09:15.556133  564163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:09:15.623357  564163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 11:09:15.613378806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:09:15.623518  564163 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:09:15.623751  564163 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:09:15.626927  564163 out.go:179] * Using Docker driver with root privileges
	I1101 11:09:15.629780  564163 cni.go:84] Creating CNI manager for ""
	I1101 11:09:15.629849  564163 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1101 11:09:15.629863  564163 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 11:09:15.629952  564163 start.go:353] cluster config:
	{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:09:15.634889  564163 out.go:179] * Starting "ha-472819" primary control-plane node in "ha-472819" cluster
	I1101 11:09:15.637899  564163 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:09:15.640856  564163 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:09:15.643596  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:15.643660  564163 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 11:09:15.643675  564163 cache.go:59] Caching tarball of preloaded images
	I1101 11:09:15.643690  564163 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:09:15.643766  564163 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:09:15.643778  564163 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:09:15.644121  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:15.644152  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json: {Name:mk1ba5f23dfb700a1a8e1eba67301a5ea1e7302e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:15.663055  564163 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:09:15.663081  564163 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:09:15.663095  564163 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:09:15.663121  564163 start.go:360] acquireMachinesLock for ha-472819: {Name:mke8efbc22a0e700799c27ca313f26b1261a26ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:09:15.663233  564163 start.go:364] duration metric: took 92.735µs to acquireMachinesLock for "ha-472819"
	I1101 11:09:15.663263  564163 start.go:93] Provisioning new machine with config: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:09:15.663334  564163 start.go:125] createHost starting for "" (driver="docker")
	I1101 11:09:15.666819  564163 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:09:15.667057  564163 start.go:159] libmachine.API.Create for "ha-472819" (driver="docker")
	I1101 11:09:15.667098  564163 client.go:173] LocalClient.Create starting
	I1101 11:09:15.667168  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:09:15.667207  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:15.667225  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:15.667289  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:09:15.667320  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:15.667334  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:15.667716  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 11:09:15.683755  564163 cli_runner.go:211] docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 11:09:15.683839  564163 network_create.go:284] running [docker network inspect ha-472819] to gather additional debugging logs...
	I1101 11:09:15.683861  564163 cli_runner.go:164] Run: docker network inspect ha-472819
	W1101 11:09:15.699243  564163 cli_runner.go:211] docker network inspect ha-472819 returned with exit code 1
	I1101 11:09:15.699272  564163 network_create.go:287] error running [docker network inspect ha-472819]: docker network inspect ha-472819: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-472819 not found
	I1101 11:09:15.699286  564163 network_create.go:289] output of [docker network inspect ha-472819]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-472819 not found
	
	** /stderr **
	I1101 11:09:15.699396  564163 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:09:15.715873  564163 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191bd20}
	I1101 11:09:15.715912  564163 network_create.go:124] attempt to create docker network ha-472819 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 11:09:15.715972  564163 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-472819 ha-472819
	I1101 11:09:15.771899  564163 network_create.go:108] docker network ha-472819 192.168.49.0/24 created
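
The network steps above follow an inspect-then-create pattern: `docker network inspect ha-472819` fails while the network does not exist, a free private subnet (192.168.49.0/24) is picked, and a labelled bridge network is created with a fixed gateway. Below is a hedged Go sketch of that same pattern, using only the CLI flags visible in the log; ensureNetwork is an invented name, not minikube's network_create.go.

package kicnet

import "os/exec"

// ensureNetwork creates a labelled bridge network with a fixed subnet and
// gateway if it does not already exist, mirroring the inspect-then-create
// flow in the log above (e.g. subnet 192.168.49.0/24, gateway 192.168.49.1).
func ensureNetwork(name, subnet, gateway string) error {
	// If inspect succeeds, the network is already there.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil
	}
	return exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name,
	).Run()
}
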
	I1101 11:09:15.771935  564163 kic.go:121] calculated static IP "192.168.49.2" for the "ha-472819" container
	I1101 11:09:15.772049  564163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:09:15.788100  564163 cli_runner.go:164] Run: docker volume create ha-472819 --label name.minikube.sigs.k8s.io=ha-472819 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:09:15.806575  564163 oci.go:103] Successfully created a docker volume ha-472819
	I1101 11:09:15.806667  564163 cli_runner.go:164] Run: docker run --rm --name ha-472819-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819 --entrypoint /usr/bin/test -v ha-472819:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:09:16.355704  564163 oci.go:107] Successfully prepared a docker volume ha-472819
	I1101 11:09:16.355755  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:16.355776  564163 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:09:16.355856  564163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:09:20.827013  564163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471121984s)
	I1101 11:09:20.827045  564163 kic.go:203] duration metric: took 4.471266092s to extract preloaded images to volume ...
	W1101 11:09:20.827184  564163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:09:20.827293  564163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:09:20.885378  564163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472819 --name ha-472819 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472819 --network ha-472819 --ip 192.168.49.2 --volume ha-472819:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:09:21.160814  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Running}}
	I1101 11:09:21.184777  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:21.209851  564163 cli_runner.go:164] Run: docker exec ha-472819 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:09:21.261057  564163 oci.go:144] the created container "ha-472819" has a running status.
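
The repeated inspect calls above (State.Running, then State.Status) are how the driver confirms the kic container actually came up before provisioning it over SSH. A small Go sketch of that status-polling loop, again shelling out to the docker CLI; waitRunning and the timeout value are illustrative, not minikube's code.

package kicwait

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format={{.State.Status}}`
// until the container reports "running" or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}
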
	I1101 11:09:21.261091  564163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa...
	I1101 11:09:21.772918  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 11:09:21.772974  564163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:09:21.803408  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:21.833811  564163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:09:21.833836  564163 kic_runner.go:114] Args: [docker exec --privileged ha-472819 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:09:21.893208  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:21.920802  564163 machine.go:94] provisionDockerMachine start ...
	I1101 11:09:21.920914  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:21.951309  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:21.951650  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:21.951667  564163 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:09:22.133550  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819
	
	I1101 11:09:22.133576  564163 ubuntu.go:182] provisioning hostname "ha-472819"
	I1101 11:09:22.133648  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:22.153259  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:22.153580  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:22.153595  564163 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819 && echo "ha-472819" | sudo tee /etc/hostname
	I1101 11:09:22.321278  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819
	
	I1101 11:09:22.321359  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:22.340531  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:22.340844  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:22.340864  564163 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:09:22.493831  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:09:22.493861  564163 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:09:22.493890  564163 ubuntu.go:190] setting up certificates
	I1101 11:09:22.493900  564163 provision.go:84] configureAuth start
	I1101 11:09:22.493964  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:09:22.511064  564163 provision.go:143] copyHostCerts
	I1101 11:09:22.511110  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:22.511144  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:09:22.511156  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:22.511234  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:09:22.511330  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:22.511353  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:09:22.511363  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:22.511399  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:09:22.511453  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:22.511474  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:09:22.511481  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:22.511507  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:09:22.511573  564163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819 san=[127.0.0.1 192.168.49.2 ha-472819 localhost minikube]
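
The configureAuth step above copies the host CA material and then generates a server certificate whose SANs are exactly the addresses this node can be reached by: 127.0.0.1, 192.168.49.2, ha-472819, localhost and minikube. Below is a sketch of issuing such a certificate with Go's standard library, assuming an already-loaded CA key pair; it shows the general x509 technique, not minikube's provision.go.

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate carrying the SAN list mentioned in
// the log above. caCert/caPriv are an existing CA pair; the expiry mirrors the
// CertExpiration:26280h0m0s value from the cluster config.
func newServerCert(caCert *x509.Certificate, caPriv *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-472819"}},
		DNSNames:     []string{"ha-472819", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caPriv)
	if err != nil {
		return nil, nil, err
	}
	return der, priv, nil
}
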
	I1101 11:09:23.107819  564163 provision.go:177] copyRemoteCerts
	I1101 11:09:23.107890  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:09:23.107933  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.125061  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.229470  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:09:23.229557  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:09:23.247096  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:09:23.247159  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1101 11:09:23.264591  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:09:23.264659  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 11:09:23.282625  564163 provision.go:87] duration metric: took 788.694673ms to configureAuth
	I1101 11:09:23.282653  564163 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:09:23.282872  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:23.282984  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.300232  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:23.300543  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I1101 11:09:23.300570  564163 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:09:23.560046  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:09:23.560073  564163 machine.go:97] duration metric: took 1.639247203s to provisionDockerMachine
	I1101 11:09:23.560084  564163 client.go:176] duration metric: took 7.892975355s to LocalClient.Create
	I1101 11:09:23.560098  564163 start.go:167] duration metric: took 7.893042884s to libmachine.API.Create "ha-472819"
	I1101 11:09:23.560105  564163 start.go:293] postStartSetup for "ha-472819" (driver="docker")
	I1101 11:09:23.560115  564163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:09:23.560191  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:09:23.560242  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.577371  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.681857  564163 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:09:23.685148  564163 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:09:23.685219  564163 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:09:23.685238  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:09:23.685303  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:09:23.685388  564163 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:09:23.685400  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:09:23.685527  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:09:23.692811  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:09:23.710406  564163 start.go:296] duration metric: took 150.285888ms for postStartSetup
	I1101 11:09:23.710773  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:09:23.727204  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:23.727491  564163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:09:23.727555  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.744722  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.847065  564163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:09:23.851940  564163 start.go:128] duration metric: took 8.188589867s to createHost
	I1101 11:09:23.851964  564163 start.go:83] releasing machines lock for "ha-472819", held for 8.188717483s
	I1101 11:09:23.852042  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:09:23.868795  564163 ssh_runner.go:195] Run: cat /version.json
	I1101 11:09:23.868846  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.869109  564163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:09:23.869169  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:23.887234  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:23.888188  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:24.071485  564163 ssh_runner.go:195] Run: systemctl --version
	I1101 11:09:24.078091  564163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:09:24.116837  564163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:09:24.121129  564163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:09:24.121205  564163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:09:24.150584  564163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:09:24.150648  564163 start.go:496] detecting cgroup driver to use...
	I1101 11:09:24.150698  564163 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:09:24.150758  564163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:09:24.168976  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:09:24.181753  564163 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:09:24.181845  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:09:24.198141  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:09:24.216855  564163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:09:24.341682  564163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:09:24.476025  564163 docker.go:234] disabling docker service ...
	I1101 11:09:24.476133  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:09:24.497860  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:09:24.511210  564163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:09:24.628871  564163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:09:24.751818  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:09:24.765436  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:09:24.779636  564163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:09:24.779754  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.788816  564163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:09:24.788939  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.797807  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.806546  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.815271  564163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:09:24.823860  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.832662  564163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.846440  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:24.855087  564163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:09:24.862971  564163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:09:24.870471  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:09:24.977089  564163 ssh_runner.go:195] Run: sudo systemctl restart crio
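
The run of sed commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts the service. A minimal Go sketch of the same in-place substitution technique for one of those settings; setPauseImage is an invented helper, and in the log the real work is done with sed over SSH, followed by the crio restart shown above.

package crioconf

import (
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line of a CRI-O drop-in, the same
// substitution the log performs on /etc/crio/crio.conf.d/02-crio.conf,
// e.g. setPauseImage(path, "registry.k8s.io/pause:3.10.1").
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}
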
	I1101 11:09:25.118783  564163 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:09:25.118897  564163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:09:25.122853  564163 start.go:564] Will wait 60s for crictl version
	I1101 11:09:25.122960  564163 ssh_runner.go:195] Run: which crictl
	I1101 11:09:25.126642  564163 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:09:25.155501  564163 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:09:25.155650  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:09:25.191623  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:09:25.225393  564163 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:09:25.228286  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:09:25.249910  564163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:09:25.253806  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
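
The bash one-liner above keeps /etc/hosts idempotent: it drops any existing host.minikube.internal line and appends a fresh mapping to the network gateway (192.168.49.1). The same remove-then-append pattern is sketched in Go below; upsertHostEntry is an invented name, and in the log the command runs remotely over SSH rather than through this helper.

package hostsfile

import (
	"os"
	"strings"
)

// upsertHostEntry removes any line ending in "\thost.minikube.internal" and
// appends a fresh "<gatewayIP>\thost.minikube.internal" mapping.
func upsertHostEntry(path, gatewayIP string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, gatewayIP+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
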
	I1101 11:09:25.264172  564163 kubeadm.go:884] updating cluster {Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:09:25.264297  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:25.264354  564163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:09:25.296986  564163 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:09:25.297011  564163 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:09:25.297070  564163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:09:25.321789  564163 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:09:25.321813  564163 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:09:25.321821  564163 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 11:09:25.321912  564163 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:09:25.321999  564163 ssh_runner.go:195] Run: crio config
	I1101 11:09:25.380528  564163 cni.go:84] Creating CNI manager for ""
	I1101 11:09:25.380597  564163 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1101 11:09:25.380638  564163 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:09:25.380700  564163 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-472819 NodeName:ha-472819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:09:25.380877  564163 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-472819"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
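The generated kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is later copied to /var/tmp/minikube/kubeadm.yaml.new and then to kubeadm.yaml. A file like this can be sanity-checked by hand before an init; a sketch, using the kubeadm binary path the log shows (note the real run below still passes a long --ignore-preflight-errors list because of the docker driver):

    # Sketch: validate and dry-run the generated config without changing node state.
    KUBEADM=/var/lib/minikube/binaries/v1.34.1/kubeadm
    sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo "$KUBEADM" init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run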
	
	I1101 11:09:25.380929  564163 kube-vip.go:115] generating kube-vip config ...
	I1101 11:09:25.381014  564163 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:09:25.393108  564163 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:09:25.393218  564163 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
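Because the lsmod check above exited 1, kube-vip gives up on IPVS control-plane load balancing and the manifest falls back to ARP mode (vip_arp=true) with leader election for the VIP 192.168.49.254 on eth0. A quick way to check whether IPVS would have been possible on a given host, as a sketch:

    # Sketch: check for the kernel modules kube-vip needs for IPVS load balancing.
    lsmod | grep ip_vs || echo "ip_vs not loaded"
    sudo modprobe -a ip_vs ip_vs_rr 2>/dev/null || \
      echo "ip_vs modules unavailable (expected inside the kic container)"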
	I1101 11:09:25.393286  564163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:09:25.401262  564163 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:09:25.401379  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1101 11:09:25.409393  564163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1101 11:09:25.422954  564163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:09:25.436690  564163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1101 11:09:25.449600  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1101 11:09:25.462548  564163 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:09:25.466270  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:09:25.475862  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:09:25.601826  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:09:25.617309  564163 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.2
	I1101 11:09:25.617340  564163 certs.go:195] generating shared ca certs ...
	I1101 11:09:25.617373  564163 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:25.617566  564163 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:09:25.617633  564163 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:09:25.617663  564163 certs.go:257] generating profile certs ...
	I1101 11:09:25.617789  564163 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:09:25.617814  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt with IP's: []
	I1101 11:09:25.970419  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt ...
	I1101 11:09:25.970452  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt: {Name:mk2f41d01137bc613681198561d475471e9b313e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:25.970692  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key ...
	I1101 11:09:25.970711  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key: {Name:mkb2f404cf11e9ff6d4974de312113eaa2c2831e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:25.970817  564163 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4
	I1101 11:09:25.970839  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1101 11:09:26.521666  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4 ...
	I1101 11:09:26.521708  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4: {Name:mkb42da9edce8a3a5d96bc6e579423c0b2c406c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.521948  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4 ...
	I1101 11:09:26.521970  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4: {Name:mkdb988833aef0d64a2a617d4983ef55d86bf204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.522099  564163 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.985e35c4 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:09:26.522188  564163 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.985e35c4 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
	I1101 11:09:26.522253  564163 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
	I1101 11:09:26.522271  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt with IP's: []
	I1101 11:09:26.790462  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt ...
	I1101 11:09:26.790530  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt: {Name:mk06b0a4635c2902eec5ac65c88e17411a71c735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.790721  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key ...
	I1101 11:09:26.790734  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key: {Name:mkba7a7f05214b82bdfe102379d16ce7f31a4fa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:26.790827  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:09:26.790855  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:09:26.790874  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:09:26.790886  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:09:26.790901  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:09:26.790913  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:09:26.790928  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:09:26.790938  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:09:26.790995  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:09:26.791032  564163 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:09:26.791044  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:09:26.791066  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:09:26.791099  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:09:26.791128  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:09:26.791177  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:09:26.791207  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:26.791225  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:09:26.791237  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:09:26.791808  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:09:26.811389  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:09:26.830648  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:09:26.848734  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:09:26.866857  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 11:09:26.884898  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:09:26.902647  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:09:26.923006  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:09:26.940928  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:09:26.958978  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:09:26.977277  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:09:26.995250  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:09:27.009661  564163 ssh_runner.go:195] Run: openssl version
	I1101 11:09:27.016856  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:09:27.026824  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:27.030812  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:27.030886  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:09:27.071933  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:09:27.080875  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:09:27.089342  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:09:27.093244  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:09:27.093330  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:09:27.135042  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:09:27.143864  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:09:27.152568  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:09:27.157253  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:09:27.157336  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:09:27.203904  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
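The openssl/ln sequence above installs each CA under /etc/ssl/certs in OpenSSL's hash-named form (<subject-hash>.0) so TLS clients on the node trust it. The same step for a single certificate, as a sketch using the minikubeCA file from the log (the hash b5213941 is the one shown above):

    # Sketch: link a CA certificate under its OpenSSL subject hash, as minikube does above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"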
	I1101 11:09:27.213041  564163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:09:27.216525  564163 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:09:27.216581  564163 kubeadm.go:401] StartCluster: {Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:09:27.216654  564163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:09:27.216709  564163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:09:27.244145  564163 cri.go:89] found id: ""
	I1101 11:09:27.244250  564163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:09:27.252322  564163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:09:27.260600  564163 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 11:09:27.260684  564163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:09:27.268298  564163 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:09:27.268321  564163 kubeadm.go:158] found existing configuration files:
	
	I1101 11:09:27.268382  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:09:27.276280  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:09:27.276376  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:09:27.283648  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:09:27.291436  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:09:27.291503  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:09:27.298997  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:09:27.306741  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:09:27.306823  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:09:27.314212  564163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:09:27.322170  564163 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:09:27.322240  564163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:09:27.329643  564163 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 11:09:27.373103  564163 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:09:27.373161  564163 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:09:27.398676  564163 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 11:09:27.398752  564163 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 11:09:27.398789  564163 kubeadm.go:319] OS: Linux
	I1101 11:09:27.398836  564163 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 11:09:27.398892  564163 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 11:09:27.398942  564163 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 11:09:27.398993  564163 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 11:09:27.399043  564163 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 11:09:27.399093  564163 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 11:09:27.399142  564163 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 11:09:27.399193  564163 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 11:09:27.399241  564163 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 11:09:27.468424  564163 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:09:27.468540  564163 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:09:27.468642  564163 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:09:27.479009  564163 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:09:27.485386  564163 out.go:252]   - Generating certificates and keys ...
	I1101 11:09:27.485566  564163 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:09:27.485677  564163 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:09:28.146819  564163 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:09:29.237838  564163 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:09:29.375619  564163 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:09:29.669873  564163 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:09:30.116711  564163 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:09:30.116850  564163 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [ha-472819 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 11:09:30.327488  564163 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:09:30.327663  564163 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [ha-472819 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 11:09:30.738429  564163 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:09:30.840698  564163 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:09:31.445175  564163 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:09:31.445453  564163 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:09:31.867483  564163 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:09:32.130887  564163 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:09:32.613555  564163 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:09:33.124550  564163 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:09:33.371630  564163 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:09:33.372352  564163 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:09:33.374952  564163 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:09:33.378411  564163 out.go:252]   - Booting up control plane ...
	I1101 11:09:33.378530  564163 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:09:33.378616  564163 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:09:33.379704  564163 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:09:33.398248  564163 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:09:33.398586  564163 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:09:33.406765  564163 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:09:33.407161  564163 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:09:33.407368  564163 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:09:33.546255  564163 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:09:33.546390  564163 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:09:34.539783  564163 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000858349s
	I1101 11:09:34.543235  564163 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:09:34.543333  564163 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 11:09:34.543583  564163 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:09:34.543675  564163 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:09:37.986079  564163 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.442309072s
	I1101 11:09:39.112325  564163 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.569041285s
	I1101 11:09:41.044837  564163 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501498887s
	I1101 11:09:41.064487  564163 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:09:41.079898  564163 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:09:41.101331  564163 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:09:41.101559  564163 kubeadm.go:319] [mark-control-plane] Marking the node ha-472819 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:09:41.119154  564163 kubeadm.go:319] [bootstrap-token] Using token: btb653.26s7hd24i40lgq1y
	I1101 11:09:41.122135  564163 out.go:252]   - Configuring RBAC rules ...
	I1101 11:09:41.122261  564163 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:09:41.131456  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:09:41.147503  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:09:41.154473  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:09:41.161216  564163 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:09:41.167042  564163 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:09:41.452979  564163 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:09:41.879928  564163 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:09:42.452390  564163 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:09:42.453615  564163 kubeadm.go:319] 
	I1101 11:09:42.453715  564163 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:09:42.453728  564163 kubeadm.go:319] 
	I1101 11:09:42.453805  564163 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:09:42.453812  564163 kubeadm.go:319] 
	I1101 11:09:42.453838  564163 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:09:42.453902  564163 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:09:42.453956  564163 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:09:42.453964  564163 kubeadm.go:319] 
	I1101 11:09:42.454018  564163 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:09:42.454026  564163 kubeadm.go:319] 
	I1101 11:09:42.454074  564163 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:09:42.454082  564163 kubeadm.go:319] 
	I1101 11:09:42.454134  564163 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:09:42.454210  564163 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:09:42.454282  564163 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:09:42.454290  564163 kubeadm.go:319] 
	I1101 11:09:42.454374  564163 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:09:42.454453  564163 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:09:42.454464  564163 kubeadm.go:319] 
	I1101 11:09:42.454806  564163 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token btb653.26s7hd24i40lgq1y \
	I1101 11:09:42.454943  564163 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 11:09:42.454976  564163 kubeadm.go:319] 	--control-plane 
	I1101 11:09:42.454986  564163 kubeadm.go:319] 
	I1101 11:09:42.455099  564163 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:09:42.455108  564163 kubeadm.go:319] 
	I1101 11:09:42.455198  564163 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token btb653.26s7hd24i40lgq1y \
	I1101 11:09:42.455312  564163 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 11:09:42.459886  564163 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 11:09:42.460131  564163 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 11:09:42.460249  564163 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
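The join commands printed by kubeadm embed a discovery hash of the cluster CA. If that output is lost, a fresh token can be minted with `kubeadm token create --print-join-command`, and the hash itself can be recomputed from the CA; a sketch of the usual recipe, assuming the CA at the certificatesDir used above (/var/lib/minikube/certs):

    # Sketch: recompute the --discovery-token-ca-cert-hash from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'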
	I1101 11:09:42.460270  564163 cni.go:84] Creating CNI manager for ""
	I1101 11:09:42.460281  564163 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1101 11:09:42.463474  564163 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 11:09:42.466360  564163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 11:09:42.471008  564163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 11:09:42.471032  564163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 11:09:42.485436  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 11:09:42.779960  564163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:09:42.780051  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:42.780097  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472819 minikube.k8s.io/updated_at=2025_11_01T11_09_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=ha-472819 minikube.k8s.io/primary=true
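The two kubectl calls above bind the kube-system default service account to cluster-admin (the minikube-rbac clusterrolebinding) and stamp the node with minikube metadata labels. Verifying both afterwards, as a sketch using the kubectl path and kubeconfig shown in the log:

    # Sketch: confirm the RBAC binding and node labels applied above.
    KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    $KUBECTL get clusterrolebinding minikube-rbac -o wide
    $KUBECTL get node ha-472819 --show-labels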
	I1101 11:09:42.946730  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:42.946790  564163 ops.go:34] apiserver oom_adj: -16
	I1101 11:09:43.447305  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:43.947632  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:44.447625  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:44.946864  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:45.447661  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:45.947118  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:46.446849  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:46.947087  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:09:47.057163  564163 kubeadm.go:1114] duration metric: took 4.277184864s to wait for elevateKubeSystemPrivileges
	I1101 11:09:47.057195  564163 kubeadm.go:403] duration metric: took 19.840620587s to StartCluster
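The repeated `kubectl get sa default` runs above are a poll at roughly 500ms intervals: minikube keeps retrying until the controller-manager's service-account controller has created the default ServiceAccount, then records the elapsed time (4.28s here). Written directly, the same wait is just a loop, as a sketch:

    # Sketch: wait until the default service account exists before proceeding.
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n default get sa default >/dev/null 2>&1; do
      sleep 0.5
    done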
	I1101 11:09:47.057212  564163 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:47.057272  564163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:09:47.057994  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:09:47.058207  564163 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:09:47.058246  564163 start.go:242] waiting for startup goroutines ...
	I1101 11:09:47.058255  564163 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:09:47.058318  564163 addons.go:70] Setting storage-provisioner=true in profile "ha-472819"
	I1101 11:09:47.058338  564163 addons.go:239] Setting addon storage-provisioner=true in "ha-472819"
	I1101 11:09:47.058365  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:09:47.058832  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:47.058997  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 11:09:47.059257  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:47.059298  564163 addons.go:70] Setting default-storageclass=true in profile "ha-472819"
	I1101 11:09:47.059315  564163 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "ha-472819"
	I1101 11:09:47.059542  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:47.094113  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:09:47.094672  564163 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:09:47.094686  564163 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:09:47.094691  564163 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:09:47.094696  564163 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:09:47.094700  564163 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:09:47.095052  564163 addons.go:239] Setting addon default-storageclass=true in "ha-472819"
	I1101 11:09:47.099545  564163 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 11:09:47.099640  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:09:47.100113  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:09:47.107324  564163 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:09:47.110259  564163 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:09:47.110280  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:09:47.110343  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:47.139417  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:47.148358  564163 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:09:47.148379  564163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:09:47.148446  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:09:47.178909  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:09:47.268521  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 11:09:47.290532  564163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:09:47.506464  564163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:09:47.743529  564163 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
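The sed pipeline at 11:09:47.268 rewrites the CoreDNS Corefile in place, inserting a hosts{} stanza that resolves host.minikube.internal to the gateway 192.168.49.1 plus a log directive, and then replaces the ConfigMap. Checking the result, as a sketch:

    # Sketch: print the patched Corefile and confirm the host record landed.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'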
	I1101 11:09:47.978148  564163 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 11:09:47.981058  564163 addons.go:515] duration metric: took 922.780782ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 11:09:47.981103  564163 start.go:247] waiting for cluster config update ...
	I1101 11:09:47.981118  564163 start.go:256] writing updated cluster config ...
	I1101 11:09:47.984275  564163 out.go:203] 
	I1101 11:09:47.987383  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:47.987471  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:47.990706  564163 out.go:179] * Starting "ha-472819-m02" control-plane node in "ha-472819" cluster
	I1101 11:09:47.993384  564163 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:09:47.996301  564163 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:09:47.999988  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:48.000022  564163 cache.go:59] Caching tarball of preloaded images
	I1101 11:09:48.000067  564163 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:09:48.000117  564163 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:09:48.000134  564163 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:09:48.000258  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:48.023925  564163 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:09:48.023946  564163 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:09:48.023960  564163 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:09:48.023985  564163 start.go:360] acquireMachinesLock for ha-472819-m02: {Name:mkd9b09c2f5958eb6cf9785ab2b809fc6e14102e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:09:48.024101  564163 start.go:364] duration metric: took 98.758µs to acquireMachinesLock for "ha-472819-m02"
	I1101 11:09:48.024126  564163 start.go:93] Provisioning new machine with config: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:09:48.024211  564163 start.go:125] createHost starting for "m02" (driver="docker")
	I1101 11:09:48.027535  564163 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:09:48.027682  564163 start.go:159] libmachine.API.Create for "ha-472819" (driver="docker")
	I1101 11:09:48.027709  564163 client.go:173] LocalClient.Create starting
	I1101 11:09:48.027787  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:09:48.027824  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:48.027850  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:48.027911  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:09:48.027933  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:09:48.027947  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:09:48.028229  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:09:48.045847  564163 network_create.go:77] Found existing network {name:ha-472819 subnet:0x4001e9bf20 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1101 11:09:48.045902  564163 kic.go:121] calculated static IP "192.168.49.3" for the "ha-472819-m02" container
	I1101 11:09:48.045995  564163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:09:48.063585  564163 cli_runner.go:164] Run: docker volume create ha-472819-m02 --label name.minikube.sigs.k8s.io=ha-472819-m02 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:09:48.081625  564163 oci.go:103] Successfully created a docker volume ha-472819-m02
	I1101 11:09:48.081759  564163 cli_runner.go:164] Run: docker run --rm --name ha-472819-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m02 --entrypoint /usr/bin/test -v ha-472819-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:09:48.719367  564163 oci.go:107] Successfully prepared a docker volume ha-472819-m02
	I1101 11:09:48.719403  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:09:48.719424  564163 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:09:48.719499  564163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:09:53.148805  564163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.429267175s)
	I1101 11:09:53.148842  564163 kic.go:203] duration metric: took 4.429414598s to extract preloaded images to volume ...
	W1101 11:09:53.148976  564163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:09:53.149102  564163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:09:53.205412  564163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472819-m02 --name ha-472819-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472819-m02 --network ha-472819 --ip 192.168.49.3 --volume ha-472819-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:09:53.540381  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Running}}
	I1101 11:09:53.560268  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:09:53.587014  564163 cli_runner.go:164] Run: docker exec ha-472819-m02 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:09:53.644476  564163 oci.go:144] the created container "ha-472819-m02" has a running status.
	I1101 11:09:53.644505  564163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa...
	I1101 11:09:53.818753  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 11:09:53.818798  564163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:09:53.843513  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:09:53.867129  564163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:09:53.867148  564163 kic_runner.go:114] Args: [docker exec --privileged ha-472819-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:09:53.915191  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:09:53.944431  564163 machine.go:94] provisionDockerMachine start ...
	I1101 11:09:53.944522  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:53.971708  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:53.972031  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:53.972040  564163 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:09:53.972718  564163 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:09:57.125375  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02
	
	I1101 11:09:57.125400  564163 ubuntu.go:182] provisioning hostname "ha-472819-m02"
	I1101 11:09:57.125484  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:57.151306  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:57.151617  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:57.151628  564163 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819-m02 && echo "ha-472819-m02" | sudo tee /etc/hostname
	I1101 11:09:57.311009  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m02
	
	I1101 11:09:57.311162  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:57.329599  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:57.330232  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:57.330258  564163 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:09:57.478017  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:09:57.478047  564163 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:09:57.478067  564163 ubuntu.go:190] setting up certificates
	I1101 11:09:57.478078  564163 provision.go:84] configureAuth start
	I1101 11:09:57.478139  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:09:57.496426  564163 provision.go:143] copyHostCerts
	I1101 11:09:57.496480  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:57.496514  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:09:57.496527  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:09:57.496611  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:09:57.496704  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:57.496727  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:09:57.496735  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:09:57.496763  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:09:57.496816  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:57.496837  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:09:57.496845  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:09:57.496872  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:09:57.496927  564163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819-m02 san=[127.0.0.1 192.168.49.3 ha-472819-m02 localhost minikube]
	I1101 11:09:58.109118  564163 provision.go:177] copyRemoteCerts
	I1101 11:09:58.109211  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:09:58.109257  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.129970  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.239761  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:09:58.239822  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:09:58.258364  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:09:58.258429  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:09:58.277456  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:09:58.277565  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 11:09:58.296719  564163 provision.go:87] duration metric: took 818.627177ms to configureAuth
	I1101 11:09:58.296743  564163 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:09:58.296931  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:09:58.297053  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.314394  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:09:58.314702  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1101 11:09:58.314723  564163 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:09:58.587462  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:09:58.587485  564163 machine.go:97] duration metric: took 4.643032549s to provisionDockerMachine
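
	Note: the SSH command above drops a sysconfig fragment telling CRI-O to treat the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the runtime. A minimal way to spot-check this on the node; the assumption (not shown in this log) is that the crio systemd unit sources /etc/sysconfig/crio.minikube:

	    cat /etc/sysconfig/crio.minikube           # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio                   # expect: active
	    systemctl cat crio | grep -i environment   # assumption: the unit references the sysconfig file
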
	I1101 11:09:58.587494  564163 client.go:176] duration metric: took 10.55977608s to LocalClient.Create
	I1101 11:09:58.587508  564163 start.go:167] duration metric: took 10.559829168s to libmachine.API.Create "ha-472819"
	I1101 11:09:58.587515  564163 start.go:293] postStartSetup for "ha-472819-m02" (driver="docker")
	I1101 11:09:58.587525  564163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:09:58.587591  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:09:58.587640  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.607665  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.713788  564163 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:09:58.716917  564163 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:09:58.716946  564163 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:09:58.716959  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:09:58.717015  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:09:58.717094  564163 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:09:58.717106  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:09:58.717202  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:09:58.725530  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:09:58.744321  564163 start.go:296] duration metric: took 156.786029ms for postStartSetup
	I1101 11:09:58.744753  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:09:58.763769  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:09:58.764152  564163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:09:58.764219  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.781753  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.882710  564163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:09:58.887489  564163 start.go:128] duration metric: took 10.863262048s to createHost
	I1101 11:09:58.887514  564163 start.go:83] releasing machines lock for "ha-472819-m02", held for 10.863404934s
	I1101 11:09:58.887586  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m02
	I1101 11:09:58.910138  564163 out.go:179] * Found network options:
	I1101 11:09:58.913143  564163 out.go:179]   - NO_PROXY=192.168.49.2
	W1101 11:09:58.916072  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:09:58.916119  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 11:09:58.916188  564163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:09:58.916238  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.916502  564163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:09:58.916556  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m02
	I1101 11:09:58.940313  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:58.958881  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m02/id_rsa Username:docker}
	I1101 11:09:59.095364  564163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:09:59.155074  564163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:09:59.155153  564163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:09:59.189592  564163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:09:59.189654  564163 start.go:496] detecting cgroup driver to use...
	I1101 11:09:59.189741  564163 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:09:59.189821  564163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:09:59.208490  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:09:59.221089  564163 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:09:59.221155  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:09:59.239119  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:09:59.259726  564163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:09:59.391794  564163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:09:59.526999  564163 docker.go:234] disabling docker service ...
	I1101 11:09:59.527094  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:09:59.551165  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:09:59.564683  564163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:09:59.689243  564163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:09:59.814478  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:09:59.830425  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:09:59.846686  564163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:09:59.846772  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.858345  564163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:09:59.858425  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.867679  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.876755  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.885935  564163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:09:59.894400  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.903514  564163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.917030  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:09:59.932785  564163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:09:59.941248  564163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:09:59.948963  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:00.218274  564163 ssh_runner.go:195] Run: sudo systemctl restart crio
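
	Note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), enables IPv4 forwarding, and restarts CRI-O. A quick sketch for confirming the edits landed, reusing only the paths and keys that appear in the log:

	    # verify the drop-in edits
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # verify forwarding is enabled for pod traffic
	    cat /proc/sys/net/ipv4/ip_forward          # expect: 1
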
	I1101 11:10:00.467897  564163 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:10:00.468023  564163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:10:00.473180  564163 start.go:564] Will wait 60s for crictl version
	I1101 11:10:00.473312  564163 ssh_runner.go:195] Run: which crictl
	I1101 11:10:00.478332  564163 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:10:00.530922  564163 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:10:00.531117  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:10:00.572083  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:10:00.626266  564163 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:10:00.630488  564163 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 11:10:00.640119  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:10:00.660609  564163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:10:00.667050  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:10:00.680675  564163 mustload.go:66] Loading cluster: ha-472819
	I1101 11:10:00.680904  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:00.681185  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:10:00.703209  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:10:00.703529  564163 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.3
	I1101 11:10:00.703549  564163 certs.go:195] generating shared ca certs ...
	I1101 11:10:00.703567  564163 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:00.703707  564163 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:10:00.703893  564163 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:10:00.703917  564163 certs.go:257] generating profile certs ...
	I1101 11:10:00.704037  564163 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:10:00.704075  564163 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717
	I1101 11:10:00.704096  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1101 11:10:00.826368  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717 ...
	I1101 11:10:00.826415  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717: {Name:mk86b52ad2762405e19fd51a0df3aa2cea75b088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:00.826658  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717 ...
	I1101 11:10:00.826682  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717: {Name:mk7b72151895b70df48e1e5a1aaae8ffe13ae0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:00.826797  564163 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.4c464717 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:10:00.826971  564163 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.4c464717 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
	I1101 11:10:00.827168  564163 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
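
	Note: because a second control-plane node is joining, minikube regenerates the apiserver serving certificate so its SANs cover the new node IP (192.168.49.3) and the HA VIP (192.168.49.254). A hedged way to inspect the result, using the certificate path from the log:

	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	    # expect 192.168.49.2, 192.168.49.3 and 192.168.49.254 among the IP entries
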
	I1101 11:10:00.827191  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:10:00.827207  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:10:00.827220  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:10:00.827235  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:10:00.827247  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:10:00.827260  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:10:00.827272  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:10:00.827284  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:10:00.827342  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:10:00.827371  564163 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:10:00.827380  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:10:00.827406  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:10:00.827430  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:10:00.827453  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:10:00.827502  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:10:00.827540  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:00.827559  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:10:00.827577  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:10:00.827662  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:10:00.847637  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:10:00.950109  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 11:10:00.954551  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 11:10:00.963598  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 11:10:00.967474  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 11:10:00.977011  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 11:10:00.981033  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 11:10:00.990888  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 11:10:00.995065  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 11:10:01.006340  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 11:10:01.011555  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 11:10:01.021211  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 11:10:01.025375  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 11:10:01.035197  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:10:01.059524  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:10:01.081340  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:10:01.103476  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:10:01.127247  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 11:10:01.151362  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:10:01.172198  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:10:01.192991  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:10:01.213776  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:10:01.236349  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:10:01.258068  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:10:01.278528  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 11:10:01.293835  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 11:10:01.309512  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 11:10:01.326059  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 11:10:01.340597  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 11:10:01.356408  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 11:10:01.370667  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 11:10:01.385223  564163 ssh_runner.go:195] Run: openssl version
	I1101 11:10:01.391999  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:10:01.401466  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:01.405725  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:01.405836  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:01.448102  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:10:01.458363  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:10:01.467703  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:10:01.471965  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:10:01.472078  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:10:01.516292  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:10:01.525777  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:10:01.535004  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:10:01.539426  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:10:01.539524  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:10:01.581188  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
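
	Note: the openssl/ln pairs above follow the c_rehash convention: each CA is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix so OpenSSL-based clients can find it. A small sketch of the same step for one certificate (minikubeCA.pem, as in the log, where the computed hash was b5213941):

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    ls -l "/etc/ssl/certs/${hash}.0"
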
	I1101 11:10:01.590430  564163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:10:01.594810  564163 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:10:01.594902  564163 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1101 11:10:01.595058  564163 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:10:01.595090  564163 kube-vip.go:115] generating kube-vip config ...
	I1101 11:10:01.595158  564163 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:10:01.610580  564163 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:01.610640  564163 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 11:10:01.610708  564163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:10:01.619440  564163 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:10:01.619523  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 11:10:01.628852  564163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 11:10:01.645438  564163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:10:01.660320  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
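
	Note: the generated kube-vip manifest above is written to /etc/kubernetes/manifests, so kubelet runs it as a static pod that advertises the control-plane VIP 192.168.49.254 in ARP mode (IPVS load-balancing was skipped because the ip_vs modules were unavailable). A few hedged checks once kubelet is up:

	    ls /etc/kubernetes/manifests/kube-vip.yaml
	    kubectl -n kube-system get pod kube-vip-ha-472819-m02 -o wide   # mirror pod name = kube-vip-<node>
	    ping -c1 192.168.49.254                                         # the VIP should answer from a control-plane node
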
	I1101 11:10:01.674752  564163 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:10:01.678687  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:10:01.689054  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:01.817597  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:01.837667  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:10:01.838014  564163 start.go:318] joinCluster: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:01.838163  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 11:10:01.838226  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:10:01.859436  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:10:02.042488  564163 start.go:344] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:02.042580  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nj59cf.7iwf9knb5mhb6h6v --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I1101 11:10:18.171255  564163 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nj59cf.7iwf9knb5mhb6h6v --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.128647108s)
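
	Note: the join command executed above was produced on the primary with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` it embeds is the SHA-256 of the cluster CA's public key; for an RSA CA it can be reproduced on an existing control-plane node with the standard kubeadm recipe (paths are the kubeadm defaults, not taken from this log):

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
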
	I1101 11:10:18.171325  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1101 11:10:18.661883  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472819-m02 minikube.k8s.io/updated_at=2025_11_01T11_10_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=ha-472819 minikube.k8s.io/primary=false
	I1101 11:10:18.772634  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472819-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1101 11:10:18.882683  564163 start.go:320] duration metric: took 17.044664183s to joinCluster
	I1101 11:10:18.882755  564163 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:18.883009  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:18.885639  564163 out.go:179] * Verifying Kubernetes components...
	I1101 11:10:18.888568  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:19.048903  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:19.065355  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 11:10:19.065459  564163 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 11:10:19.065823  564163 node_ready.go:35] waiting up to 6m0s for node "ha-472819-m02" to be "Ready" ...
	W1101 11:10:21.069930  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:23.070173  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:25.070653  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:27.070920  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:29.570450  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:32.070088  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:34.570636  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:37.069136  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:39.069864  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:41.070271  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:43.570410  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:46.070044  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:48.570002  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:51.069255  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:53.069352  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:55.071799  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:57.570240  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:10:59.570964  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	W1101 11:11:02.069796  564163 node_ready.go:57] node "ha-472819-m02" has "Ready":"False" status (will retry)
	I1101 11:11:02.573138  564163 node_ready.go:49] node "ha-472819-m02" is "Ready"
	I1101 11:11:02.573173  564163 node_ready.go:38] duration metric: took 43.507323629s for node "ha-472819-m02" to be "Ready" ...
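
	Note: the 43.5s of "will retry" lines above are minikube polling the Node object until its Ready condition turns True. The equivalent client-side wait, using the same node name and 6-minute budget as the log, would be roughly:

	    kubectl wait --for=condition=Ready node/ha-472819-m02 --timeout=6m
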
	I1101 11:11:02.573188  564163 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:11:02.573247  564163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:11:02.592905  564163 api_server.go:72] duration metric: took 43.710116122s to wait for apiserver process to appear ...
	I1101 11:11:02.592930  564163 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:11:02.592950  564163 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 11:11:02.601671  564163 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 11:11:02.602744  564163 api_server.go:141] control plane version: v1.34.1
	I1101 11:11:02.602768  564163 api_server.go:131] duration metric: took 9.830793ms to wait for apiserver health ...
	I1101 11:11:02.602776  564163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:11:02.608318  564163 system_pods.go:59] 17 kube-system pods found
	I1101 11:11:02.608348  564163 system_pods.go:61] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:11:02.608355  564163 system_pods.go:61] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:11:02.608360  564163 system_pods.go:61] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:11:02.608364  564163 system_pods.go:61] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:11:02.608368  564163 system_pods.go:61] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:11:02.608372  564163 system_pods.go:61] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:11:02.608376  564163 system_pods.go:61] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:11:02.608380  564163 system_pods.go:61] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:11:02.608385  564163 system_pods.go:61] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:11:02.608389  564163 system_pods.go:61] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:11:02.608398  564163 system_pods.go:61] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:11:02.608402  564163 system_pods.go:61] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:11:02.608407  564163 system_pods.go:61] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:11:02.608411  564163 system_pods.go:61] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:11:02.608415  564163 system_pods.go:61] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:11:02.608419  564163 system_pods.go:61] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:11:02.608424  564163 system_pods.go:61] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:11:02.608433  564163 system_pods.go:74] duration metric: took 5.651361ms to wait for pod list to return data ...
	I1101 11:11:02.608449  564163 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:11:02.615780  564163 default_sa.go:45] found service account: "default"
	I1101 11:11:02.615809  564163 default_sa.go:55] duration metric: took 7.352591ms for default service account to be created ...
	I1101 11:11:02.615820  564163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:11:02.620640  564163 system_pods.go:86] 17 kube-system pods found
	I1101 11:11:02.620675  564163 system_pods.go:89] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:11:02.620683  564163 system_pods.go:89] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:11:02.620687  564163 system_pods.go:89] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:11:02.620691  564163 system_pods.go:89] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:11:02.620695  564163 system_pods.go:89] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:11:02.620698  564163 system_pods.go:89] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:11:02.620705  564163 system_pods.go:89] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:11:02.620710  564163 system_pods.go:89] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:11:02.620714  564163 system_pods.go:89] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:11:02.620718  564163 system_pods.go:89] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:11:02.620722  564163 system_pods.go:89] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:11:02.620726  564163 system_pods.go:89] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:11:02.620731  564163 system_pods.go:89] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:11:02.620740  564163 system_pods.go:89] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:11:02.620745  564163 system_pods.go:89] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:11:02.620779  564163 system_pods.go:89] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:11:02.620787  564163 system_pods.go:89] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:11:02.620796  564163 system_pods.go:126] duration metric: took 4.970499ms to wait for k8s-apps to be running ...
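	(Editor's note: "waiting for k8s-apps to be running" boils down to listing kube-system pods and checking each is in the Running phase. A rough client-go sketch of that check follows; it is an illustration, not minikube's system_pods.go, and the kubeconfig path is an assumption.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube builds its client config from the
		// profile's client cert/key instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("pod %s is %s, still waiting\n", p.Name, p.Status.Phase)
			}
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	}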
	I1101 11:11:02.620805  564163 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:11:02.620889  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:11:02.642752  564163 system_svc.go:56] duration metric: took 21.936584ms WaitForService to wait for kubelet
	I1101 11:11:02.642777  564163 kubeadm.go:587] duration metric: took 43.759995961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:11:02.642797  564163 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:11:02.646454  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:11:02.646483  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:11:02.646495  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:11:02.646499  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:11:02.646505  564163 node_conditions.go:105] duration metric: took 3.701806ms to run NodePressure ...
	I1101 11:11:02.646517  564163 start.go:242] waiting for startup goroutines ...
	I1101 11:11:02.646543  564163 start.go:256] writing updated cluster config ...
	I1101 11:11:02.649940  564163 out.go:203] 
	I1101 11:11:02.652977  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:02.653122  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:11:02.656351  564163 out.go:179] * Starting "ha-472819-m03" control-plane node in "ha-472819" cluster
	I1101 11:11:02.659046  564163 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:11:02.661940  564163 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:11:02.664802  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:11:02.664836  564163 cache.go:59] Caching tarball of preloaded images
	I1101 11:11:02.664905  564163 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:11:02.664972  564163 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:11:02.664989  564163 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:11:02.665111  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:11:02.684654  564163 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:11:02.684679  564163 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:11:02.684692  564163 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:11:02.684714  564163 start.go:360] acquireMachinesLock for ha-472819-m03: {Name:mk3b84885ff8ece87965a525482df80362a95518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:11:02.684837  564163 start.go:364] duration metric: took 95.632µs to acquireMachinesLock for "ha-472819-m03"
	I1101 11:11:02.684871  564163 start.go:93] Provisioning new machine with config: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:02.684979  564163 start.go:125] createHost starting for "m03" (driver="docker")
	I1101 11:11:02.688507  564163 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:11:02.688640  564163 start.go:159] libmachine.API.Create for "ha-472819" (driver="docker")
	I1101 11:11:02.688672  564163 client.go:173] LocalClient.Create starting
	I1101 11:11:02.688778  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:11:02.688818  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:11:02.688837  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:11:02.688891  564163 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:11:02.688915  564163 main.go:143] libmachine: Decoding PEM data...
	I1101 11:11:02.688930  564163 main.go:143] libmachine: Parsing certificate...
	I1101 11:11:02.689181  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:11:02.707661  564163 network_create.go:77] Found existing network {name:ha-472819 subnet:0x400141a090 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1101 11:11:02.707696  564163 kic.go:121] calculated static IP "192.168.49.4" for the "ha-472819-m03" container
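	(Editor's note: the network lookup above runs `docker network inspect` with a Go template to read the existing subnet before assigning the next sequential address, 192.168.49.4 for m03. A stripped-down sketch of the same idea via os/exec; the template here only pulls the subnet, whereas the full template in the log also extracts the gateway, MTU and container IPs.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask Docker for the subnet of the cluster network (network name taken from this log).
		out, err := exec.Command("docker", "network", "inspect", "ha-472819",
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err != nil {
			panic(err)
		}
		subnet := strings.TrimSpace(string(out))
		fmt.Println("subnet:", subnet) // e.g. 192.168.49.0/24; node N gets host address N+1 in this run
	}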
	I1101 11:11:02.707771  564163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:11:02.725657  564163 cli_runner.go:164] Run: docker volume create ha-472819-m03 --label name.minikube.sigs.k8s.io=ha-472819-m03 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:11:02.747975  564163 oci.go:103] Successfully created a docker volume ha-472819-m03
	I1101 11:11:02.748068  564163 cli_runner.go:164] Run: docker run --rm --name ha-472819-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m03 --entrypoint /usr/bin/test -v ha-472819-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:11:03.319169  564163 oci.go:107] Successfully prepared a docker volume ha-472819-m03
	I1101 11:11:03.319219  564163 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:11:03.319239  564163 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:11:03.319307  564163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:11:07.770669  564163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-472819-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.451319736s)
	I1101 11:11:07.770702  564163 kic.go:203] duration metric: took 4.451458674s to extract preloaded images to volume ...
	W1101 11:11:07.770834  564163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:11:07.770945  564163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:11:07.830823  564163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-472819-m03 --name ha-472819-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-472819-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-472819-m03 --network ha-472819 --ip 192.168.49.4 --volume ha-472819-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:11:08.209425  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Running}}
	I1101 11:11:08.233583  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:11:08.259513  564163 cli_runner.go:164] Run: docker exec ha-472819-m03 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:11:08.318997  564163 oci.go:144] the created container "ha-472819-m03" has a running status.
	I1101 11:11:08.319026  564163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa...
	I1101 11:11:08.824570  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 11:11:08.824677  564163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:11:08.851513  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:11:08.872714  564163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:11:08.872734  564163 kic_runner.go:114] Args: [docker exec --privileged ha-472819-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:11:08.929245  564163 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:11:08.951783  564163 machine.go:94] provisionDockerMachine start ...
	I1101 11:11:08.951884  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:08.971759  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:08.972075  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:08.972084  564163 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:11:08.972780  564163 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:11:12.129991  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m03
	
	I1101 11:11:12.130016  564163 ubuntu.go:182] provisioning hostname "ha-472819-m03"
	I1101 11:11:12.130084  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:12.158484  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:12.158792  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:12.158807  564163 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-472819-m03 && echo "ha-472819-m03" | sudo tee /etc/hostname
	I1101 11:11:12.332540  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-472819-m03
	
	I1101 11:11:12.332619  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:12.351114  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:12.351425  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:12.351444  564163 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-472819-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-472819-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-472819-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:11:12.502378  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
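	(Editor's note: every `ssh_runner.go:195] Run:` line in this section is a command executed over the node container's forwarded SSH port, 127.0.0.1:33520 here, authenticated with the generated id_rsa key. A minimal sketch of such a runner using golang.org/x/crypto/ssh follows; it is illustrative only, with the address and key path taken from this log.)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH executes one command on the node and returns its combined output.
	func runOverSSH(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test environment
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("127.0.0.1:33520",
			"/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa",
			"hostname")
		fmt.Println(out, err)
	}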
	I1101 11:11:12.502450  564163 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:11:12.502486  564163 ubuntu.go:190] setting up certificates
	I1101 11:11:12.502527  564163 provision.go:84] configureAuth start
	I1101 11:11:12.502632  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:11:12.519713  564163 provision.go:143] copyHostCerts
	I1101 11:11:12.519756  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:11:12.519789  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:11:12.519796  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:11:12.519876  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:11:12.519955  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:11:12.519972  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:11:12.519977  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:11:12.520002  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:11:12.520040  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:11:12.520056  564163 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:11:12.520060  564163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:11:12.520083  564163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:11:12.520129  564163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.ha-472819-m03 san=[127.0.0.1 192.168.49.4 ha-472819-m03 localhost minikube]
	I1101 11:11:13.612826  564163 provision.go:177] copyRemoteCerts
	I1101 11:11:13.612953  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:11:13.613033  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:13.637098  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:13.745943  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 11:11:13.746008  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:11:13.765430  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 11:11:13.765498  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 11:11:13.784115  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 11:11:13.784225  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:11:13.804599  564163 provision.go:87] duration metric: took 1.3020372s to configureAuth
	I1101 11:11:13.804630  564163 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:11:13.804902  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:13.805040  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:13.824430  564163 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:13.824739  564163 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1101 11:11:13.824766  564163 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:11:14.162205  564163 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:11:14.162231  564163 machine.go:97] duration metric: took 5.210427743s to provisionDockerMachine
	I1101 11:11:14.162240  564163 client.go:176] duration metric: took 11.473529387s to LocalClient.Create
	I1101 11:11:14.162254  564163 start.go:167] duration metric: took 11.473618142s to libmachine.API.Create "ha-472819"
	I1101 11:11:14.162261  564163 start.go:293] postStartSetup for "ha-472819-m03" (driver="docker")
	I1101 11:11:14.162271  564163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:11:14.162342  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:11:14.162391  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.180648  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.290174  564163 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:11:14.293569  564163 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:11:14.293597  564163 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:11:14.293610  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:11:14.293673  564163 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:11:14.293794  564163 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:11:14.293802  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /etc/ssl/certs/5347202.pem
	I1101 11:11:14.293910  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:11:14.301571  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:11:14.319949  564163 start.go:296] duration metric: took 157.672021ms for postStartSetup
	I1101 11:11:14.320309  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:11:14.339186  564163 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/config.json ...
	I1101 11:11:14.339488  564163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:11:14.339544  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.356118  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.462914  564163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:11:14.467880  564163 start.go:128] duration metric: took 11.782886335s to createHost
	I1101 11:11:14.467906  564163 start.go:83] releasing machines lock for "ha-472819-m03", held for 11.783053073s
	I1101 11:11:14.467977  564163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:11:14.491429  564163 out.go:179] * Found network options:
	I1101 11:11:14.494078  564163 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1101 11:11:14.497086  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:11:14.497117  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:11:14.497140  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	W1101 11:11:14.497150  564163 proxy.go:120] fail to check proxy env: Error ip not in block
	I1101 11:11:14.497218  564163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:11:14.497268  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.497522  564163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:11:14.497567  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:11:14.527486  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.535318  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:11:14.691428  564163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:11:14.751657  564163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:11:14.751737  564163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:11:14.784282  564163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:11:14.784309  564163 start.go:496] detecting cgroup driver to use...
	I1101 11:11:14.784342  564163 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:11:14.784395  564163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:11:14.804100  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:11:14.817895  564163 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:11:14.817997  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:11:14.836631  564163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:11:14.858424  564163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:11:14.995659  564163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:11:15.154010  564163 docker.go:234] disabling docker service ...
	I1101 11:11:15.154132  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:11:15.178485  564163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:11:15.193239  564163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:11:15.327679  564163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:11:15.454029  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:11:15.467802  564163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:11:15.484429  564163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:11:15.484525  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.494973  564163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:11:15.495089  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.505092  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.515189  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.525494  564163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:11:15.535841  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.545915  564163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.567914  564163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:11:15.577406  564163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:11:15.585734  564163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:11:15.594497  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:11:15.717478  564163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:11:15.855286  564163 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:11:15.855416  564163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:11:15.860080  564163 start.go:564] Will wait 60s for crictl version
	I1101 11:11:15.860199  564163 ssh_runner.go:195] Run: which crictl
	I1101 11:11:15.864416  564163 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:11:15.901128  564163 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:11:15.901276  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:11:15.939191  564163 ssh_runner.go:195] Run: crio --version
	I1101 11:11:15.976924  564163 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:11:15.979768  564163 out.go:179]   - env NO_PROXY=192.168.49.2
	I1101 11:11:15.982623  564163 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1101 11:11:15.985569  564163 cli_runner.go:164] Run: docker network inspect ha-472819 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:11:16.003209  564163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 11:11:16.009752  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:11:16.022020  564163 mustload.go:66] Loading cluster: ha-472819
	I1101 11:11:16.022297  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:16.022563  564163 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:11:16.041810  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:11:16.042214  564163 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819 for IP: 192.168.49.4
	I1101 11:11:16.042226  564163 certs.go:195] generating shared ca certs ...
	I1101 11:11:16.042242  564163 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:11:16.042364  564163 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:11:16.042403  564163 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:11:16.042420  564163 certs.go:257] generating profile certs ...
	I1101 11:11:16.042507  564163 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key
	I1101 11:11:16.042544  564163 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d
	I1101 11:11:16.042559  564163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1101 11:11:17.419467  564163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d ...
	I1101 11:11:17.419504  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d: {Name:mke3ca75daab1021e235325f0aa6ae3fdb3aebaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:11:17.419709  564163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d ...
	I1101 11:11:17.419723  564163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d: {Name:mk05b86323e75bb15d0b4b2c07a8199585004a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:11:17.419819  564163 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt.b77bbb0d -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt
	I1101 11:11:17.419955  564163 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key.b77bbb0d -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key
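	(Editor's note: the profile apiserver certificate is regenerated at this point because its SAN list now has to cover the new node, 192.168.49.4, alongside the other control-plane IPs and the VIP 192.168.49.254. Below is a condensed crypto/x509 sketch of issuing a server certificate with IP SANs from a CA; it illustrates the mechanism only and is not minikube's crypto.go. Error handling is elided for brevity.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Assumed inputs: an existing CA key pair. A throwaway CA is generated here
		// only so the sketch is self-contained.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the IP SANs listed in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
				net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}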
	I1101 11:11:17.420103  564163 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key
	I1101 11:11:17.420121  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 11:11:17.420136  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 11:11:17.420154  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 11:11:17.420170  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 11:11:17.420183  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 11:11:17.420205  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 11:11:17.420223  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 11:11:17.420239  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 11:11:17.420291  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:11:17.420324  564163 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:11:17.420336  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:11:17.420360  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:11:17.420385  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:11:17.420410  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:11:17.420457  564163 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:11:17.420490  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:17.420505  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem -> /usr/share/ca-certificates/534720.pem
	I1101 11:11:17.420518  564163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> /usr/share/ca-certificates/5347202.pem
	I1101 11:11:17.420579  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:11:17.446123  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:11:17.550099  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1101 11:11:17.554459  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1101 11:11:17.564628  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1101 11:11:17.568775  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1101 11:11:17.578449  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1101 11:11:17.582314  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1101 11:11:17.596987  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1101 11:11:17.602143  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1101 11:11:17.612550  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1101 11:11:17.616760  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1101 11:11:17.625280  564163 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1101 11:11:17.629323  564163 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1101 11:11:17.637956  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:11:17.670252  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:11:17.690944  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:11:17.712898  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:11:17.732957  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1101 11:11:17.754424  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:11:17.774366  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:11:17.794672  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:11:17.813962  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:11:17.832048  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:11:17.851761  564163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:11:17.870024  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1101 11:11:17.882634  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1101 11:11:17.899889  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1101 11:11:17.913845  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1101 11:11:17.938379  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1101 11:11:17.953783  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1101 11:11:17.969134  564163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1101 11:11:17.984029  564163 ssh_runner.go:195] Run: openssl version
	I1101 11:11:17.990588  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:11:17.998983  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:11:18.003748  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:11:18.003824  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:11:18.047711  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:11:18.056769  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:11:18.066764  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:18.071057  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:18.071127  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:11:18.115164  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:11:18.125731  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:11:18.136765  564163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:11:18.143282  564163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:11:18.143350  564163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:11:18.186179  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
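	(Editor's note: the `openssl x509 -hash` calls paired with the `ln -fs ... /etc/ssl/certs/<hash>.0` steps above install the minikube CA and test certificates into the system trust store; OpenSSL locates trusted CAs by a subject-hash filename. A small Go sketch of that pairing via os/exec, illustrative only, with a path mirroring the log:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCert symlinks certPath into /etc/ssl/certs under the subject-hash
	// name (<hash>.0) that OpenSSL uses to look up trusted CA certificates.
	func installCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}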
	I1101 11:11:18.195359  564163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:11:18.199380  564163 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:11:18.199489  564163 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1101 11:11:18.199610  564163 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-472819-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:11:18.199643  564163 kube-vip.go:115] generating kube-vip config ...
	I1101 11:11:18.199705  564163 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1101 11:11:18.212173  564163 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:11:18.212238  564163 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1101 11:11:18.212301  564163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:11:18.220267  564163 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:11:18.220347  564163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1101 11:11:18.228465  564163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 11:11:18.244016  564163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:11:18.258096  564163 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
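	(Editor's note: the generated kube-vip manifest is written to /etc/kubernetes/manifests, so the kubelet on the new control-plane node runs it as a static pod. A quick sketch of sanity-checking such a manifest by decoding it into a corev1.Pod with sigs.k8s.io/yaml; purely illustrative, with the path matching the scp target above.)

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod corev1.Pod
		// sigs.k8s.io/yaml converts YAML to JSON first, so the Kubernetes
		// struct tags on corev1.Pod apply during decoding.
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		fmt.Printf("static pod %s/%s, image %s, hostNetwork=%v\n",
			pod.Namespace, pod.Name, pod.Spec.Containers[0].Image, pod.Spec.HostNetwork)
	}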
	I1101 11:11:18.272629  564163 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1101 11:11:18.276508  564163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:11:18.287804  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:11:18.413977  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:11:18.431959  564163 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:11:18.432256  564163 start.go:318] joinCluster: &{Name:ha-472819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-472819 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:11:18.432437  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 11:11:18.432483  564163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:11:18.451746  564163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:11:18.642595  564163 start.go:344] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:18.642681  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw2jul.ax6v1umh51v4f6c5 --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I1101 11:11:43.319187  564163 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw2jul.ax6v1umh51v4f6c5 --discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-472819-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (24.676482939s)
	I1101 11:11:43.319257  564163 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1101 11:11:43.769284  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-472819-m03 minikube.k8s.io/updated_at=2025_11_01T11_11_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=ha-472819 minikube.k8s.io/primary=false
	I1101 11:11:43.908814  564163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-472819-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1101 11:11:44.056105  564163 start.go:320] duration metric: took 25.623843323s to joinCluster
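The join above reduces to two kubeadm invocations: `kubeadm token create --print-join-command` on an existing control-plane node, then running the printed command on the new node with the extra control-plane flags shown in the log. A minimal Go sketch of the same flow (assuming kubeadm is on the local PATH and root privileges are available; minikube actually drives these commands over SSH via ssh_runner, and the advertise address and node name below are specific to this cluster):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on an existing control-plane node, print a reusable join command.
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2: on the node being added, run that command with the extra
	// control-plane flags seen in the log (address and node name are
	// assumptions specific to this cluster).
	joinCmd += " --control-plane" +
		" --apiserver-advertise-address=192.168.49.4" +
		" --node-name=ha-472819-m03" +
		" --ignore-preflight-errors=all"
	fmt.Println("running:", joinCmd)
	if err := exec.Command("sudo", "/bin/bash", "-c", joinCmd).Run(); err != nil {
		panic(err)
	}
}
```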
	I1101 11:11:44.056180  564163 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:44.056483  564163 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:44.059138  564163 out.go:179] * Verifying Kubernetes components...
	I1101 11:11:44.062045  564163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:11:44.226994  564163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:11:44.241935  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1101 11:11:44.242072  564163 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1101 11:11:44.243546  564163 node_ready.go:35] waiting up to 6m0s for node "ha-472819-m03" to be "Ready" ...
	W1101 11:11:46.248078  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:48.747123  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:50.747643  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:52.747859  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:55.248106  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:57.248199  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:11:59.747904  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:02.247750  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:04.747562  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:07.246739  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:09.246975  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:11.248196  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:13.747822  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:16.249127  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:18.747882  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:20.749775  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:23.247174  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	W1101 11:12:25.248192  564163 node_ready.go:57] node "ha-472819-m03" has "Ready":"False" status (will retry)
	I1101 11:12:26.249644  564163 node_ready.go:49] node "ha-472819-m03" is "Ready"
	I1101 11:12:26.249669  564163 node_ready.go:38] duration metric: took 42.006098949s for node "ha-472819-m03" to be "Ready" ...
	I1101 11:12:26.249682  564163 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:12:26.249800  564163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:12:26.262481  564163 api_server.go:72] duration metric: took 42.206265905s to wait for apiserver process to appear ...
	I1101 11:12:26.262504  564163 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:12:26.262523  564163 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 11:12:26.271275  564163 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 11:12:26.272261  564163 api_server.go:141] control plane version: v1.34.1
	I1101 11:12:26.272284  564163 api_server.go:131] duration metric: took 9.773431ms to wait for apiserver health ...
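The healthz probe is a plain HTTPS GET against the apiserver using the profile's client certificate and the cluster CA. A rough stdlib-only equivalent (the certificate paths are assumptions copied from the client config logged above):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/21830-532863/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/ha-472819/client.crt",
		base+"/profiles/ha-472819/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the log shows: 200, ok
}
```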
	I1101 11:12:26.272294  564163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:12:26.280500  564163 system_pods.go:59] 24 kube-system pods found
	I1101 11:12:26.280537  564163 system_pods.go:61] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:12:26.280544  564163 system_pods.go:61] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:12:26.280549  564163 system_pods.go:61] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:12:26.280553  564163 system_pods.go:61] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:12:26.280558  564163 system_pods.go:61] "etcd-ha-472819-m03" [80e840dc-9437-4351-967c-2a400d35dc89] Running
	I1101 11:12:26.280563  564163 system_pods.go:61] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:12:26.280567  564163 system_pods.go:61] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:12:26.280572  564163 system_pods.go:61] "kindnet-mz6bw" [217b3b0a-0680-4a26-98ee-04dd92e1b732] Running
	I1101 11:12:26.280576  564163 system_pods.go:61] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:12:26.280580  564163 system_pods.go:61] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:12:26.280585  564163 system_pods.go:61] "kube-apiserver-ha-472819-m03" [4dd6c2e8-c1fd-4a41-b208-b227db99ef54] Running
	I1101 11:12:26.280595  564163 system_pods.go:61] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:12:26.280600  564163 system_pods.go:61] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:12:26.280607  564163 system_pods.go:61] "kube-controller-manager-ha-472819-m03" [a67b5941-388f-48a8-b452-ff50be57ca66] Running
	I1101 11:12:26.280613  564163 system_pods.go:61] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:12:26.280624  564163 system_pods.go:61] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:12:26.280628  564163 system_pods.go:61] "kube-proxy-gc4g4" [2289bf2a-0371-4bad-8440-6e299ce1e8a9] Running
	I1101 11:12:26.280632  564163 system_pods.go:61] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:12:26.280644  564163 system_pods.go:61] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:12:26.280648  564163 system_pods.go:61] "kube-scheduler-ha-472819-m03" [2b72cc38-a219-4fcc-8a1e-977391aee0b1] Running
	I1101 11:12:26.280652  564163 system_pods.go:61] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:12:26.280657  564163 system_pods.go:61] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:12:26.280666  564163 system_pods.go:61] "kube-vip-ha-472819-m03" [a3e5599c-b0a5-4792-9192-397f763006fc] Running
	I1101 11:12:26.280671  564163 system_pods.go:61] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:12:26.280676  564163 system_pods.go:74] duration metric: took 8.377598ms to wait for pod list to return data ...
	I1101 11:12:26.280686  564163 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:12:26.284133  564163 default_sa.go:45] found service account: "default"
	I1101 11:12:26.284159  564163 default_sa.go:55] duration metric: took 3.464322ms for default service account to be created ...
	I1101 11:12:26.284168  564163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:12:26.289312  564163 system_pods.go:86] 24 kube-system pods found
	I1101 11:12:26.289355  564163 system_pods.go:89] "coredns-66bc5c9577-bntfw" [17503733-2ab6-460c-aa3f-21d031c70abd] Running
	I1101 11:12:26.289363  564163 system_pods.go:89] "coredns-66bc5c9577-n2tp2" [4b6711b0-f71a-421e-922d-eb44266c95a4] Running
	I1101 11:12:26.289367  564163 system_pods.go:89] "etcd-ha-472819" [6807b695-9ca8-4691-8aac-87ff5cdaca11] Running
	I1101 11:12:26.289372  564163 system_pods.go:89] "etcd-ha-472819-m02" [3cef3cc2-cf4e-4445-a55c-ce64fd2279ff] Running
	I1101 11:12:26.289378  564163 system_pods.go:89] "etcd-ha-472819-m03" [80e840dc-9437-4351-967c-2a400d35dc89] Running
	I1101 11:12:26.289397  564163 system_pods.go:89] "kindnet-cw2kt" [70effae0-c034-4a35-b3d9-3e092c079100] Running
	I1101 11:12:26.289411  564163 system_pods.go:89] "kindnet-dkhrw" [abb3d05e-e447-4fe5-8996-26e79d7e2b4d] Running
	I1101 11:12:26.289416  564163 system_pods.go:89] "kindnet-mz6bw" [217b3b0a-0680-4a26-98ee-04dd92e1b732] Running
	I1101 11:12:26.289421  564163 system_pods.go:89] "kube-apiserver-ha-472819" [a65e9eca-1f17-4ff9-b4d0-2b26612bc846] Running
	I1101 11:12:26.289429  564163 system_pods.go:89] "kube-apiserver-ha-472819-m02" [c94a478e-4714-4590-8c91-17468898125c] Running
	I1101 11:12:26.289433  564163 system_pods.go:89] "kube-apiserver-ha-472819-m03" [4dd6c2e8-c1fd-4a41-b208-b227db99ef54] Running
	I1101 11:12:26.289438  564163 system_pods.go:89] "kube-controller-manager-ha-472819" [e6236069-2227-4783-b8e3-6df90e52e82c] Running
	I1101 11:12:26.289442  564163 system_pods.go:89] "kube-controller-manager-ha-472819-m02" [f5e22b4d-d7c1-47b0-a044-4007e77d6ebc] Running
	I1101 11:12:26.289454  564163 system_pods.go:89] "kube-controller-manager-ha-472819-m03" [a67b5941-388f-48a8-b452-ff50be57ca66] Running
	I1101 11:12:26.289458  564163 system_pods.go:89] "kube-proxy-47prj" [16f8f4f3-8267-4ce3-997b-1f4afb0f5104] Running
	I1101 11:12:26.289461  564163 system_pods.go:89] "kube-proxy-djfvb" [2c010b85-48bd-4004-886f-fbe4e03884a9] Running
	I1101 11:12:26.289467  564163 system_pods.go:89] "kube-proxy-gc4g4" [2289bf2a-0371-4bad-8440-6e299ce1e8a9] Running
	I1101 11:12:26.289471  564163 system_pods.go:89] "kube-scheduler-ha-472819" [78ac9fa6-2686-404f-a977-d7710745150b] Running
	I1101 11:12:26.289475  564163 system_pods.go:89] "kube-scheduler-ha-472819-m02" [31b58b00-ca07-42ad-a9a7-20da16f0a251] Running
	I1101 11:12:26.289479  564163 system_pods.go:89] "kube-scheduler-ha-472819-m03" [2b72cc38-a219-4fcc-8a1e-977391aee0b1] Running
	I1101 11:12:26.289483  564163 system_pods.go:89] "kube-vip-ha-472819" [0e1f82b1-9039-49f8-b83f-8c40ab9ec44f] Running
	I1101 11:12:26.289491  564163 system_pods.go:89] "kube-vip-ha-472819-m02" [8964dc5d-7184-43bf-a1bd-0f9b261bb9df] Running
	I1101 11:12:26.289495  564163 system_pods.go:89] "kube-vip-ha-472819-m03" [a3e5599c-b0a5-4792-9192-397f763006fc] Running
	I1101 11:12:26.289499  564163 system_pods.go:89] "storage-provisioner" [18119b45-4932-4521-b0e9-e3a73bc6d3b1] Running
	I1101 11:12:26.289507  564163 system_pods.go:126] duration metric: took 5.334417ms to wait for k8s-apps to be running ...
	I1101 11:12:26.289518  564163 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:12:26.289578  564163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:12:26.305948  564163 system_svc.go:56] duration metric: took 16.419873ms WaitForService to wait for kubelet
	I1101 11:12:26.305975  564163 kubeadm.go:587] duration metric: took 42.249764697s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:12:26.305994  564163 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:12:26.309253  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:12:26.309279  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:12:26.309289  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:12:26.309294  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:12:26.309298  564163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:12:26.309303  564163 node_conditions.go:123] node cpu capacity is 2
	I1101 11:12:26.309308  564163 node_conditions.go:105] duration metric: took 3.308316ms to run NodePressure ...
	I1101 11:12:26.309331  564163 start.go:242] waiting for startup goroutines ...
	I1101 11:12:26.309355  564163 start.go:256] writing updated cluster config ...
	I1101 11:12:26.309669  564163 ssh_runner.go:195] Run: rm -f paused
	I1101 11:12:26.313122  564163 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:12:26.313679  564163 kapi.go:59] client config for ha-472819: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/ha-472819/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:12:26.330719  564163 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bntfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.336573  564163 pod_ready.go:94] pod "coredns-66bc5c9577-bntfw" is "Ready"
	I1101 11:12:26.336600  564163 pod_ready.go:86] duration metric: took 5.853087ms for pod "coredns-66bc5c9577-bntfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.336611  564163 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n2tp2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.343901  564163 pod_ready.go:94] pod "coredns-66bc5c9577-n2tp2" is "Ready"
	I1101 11:12:26.343929  564163 pod_ready.go:86] duration metric: took 7.293605ms for pod "coredns-66bc5c9577-n2tp2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.346848  564163 pod_ready.go:83] waiting for pod "etcd-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.352072  564163 pod_ready.go:94] pod "etcd-ha-472819" is "Ready"
	I1101 11:12:26.352102  564163 pod_ready.go:86] duration metric: took 5.227692ms for pod "etcd-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.352112  564163 pod_ready.go:83] waiting for pod "etcd-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.358541  564163 pod_ready.go:94] pod "etcd-ha-472819-m02" is "Ready"
	I1101 11:12:26.358573  564163 pod_ready.go:86] duration metric: took 6.453734ms for pod "etcd-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.358588  564163 pod_ready.go:83] waiting for pod "etcd-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.515018  564163 request.go:683] "Waited before sending request" delay="156.271647ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-472819-m03"
	I1101 11:12:26.714759  564163 request.go:683] "Waited before sending request" delay="196.328427ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:26.718151  564163 pod_ready.go:94] pod "etcd-ha-472819-m03" is "Ready"
	I1101 11:12:26.718181  564163 pod_ready.go:86] duration metric: took 359.546189ms for pod "etcd-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:26.914587  564163 request.go:683] "Waited before sending request" delay="196.293202ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1101 11:12:26.918462  564163 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.115021  564163 request.go:683] "Waited before sending request" delay="196.450874ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472819"
	I1101 11:12:27.314790  564163 request.go:683] "Waited before sending request" delay="196.347915ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:27.318165  564163 pod_ready.go:94] pod "kube-apiserver-ha-472819" is "Ready"
	I1101 11:12:27.318193  564163 pod_ready.go:86] duration metric: took 399.695688ms for pod "kube-apiserver-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.318203  564163 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.514681  564163 request.go:683] "Waited before sending request" delay="196.361741ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472819-m02"
	I1101 11:12:27.714676  564163 request.go:683] "Waited before sending request" delay="196.346856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:27.718653  564163 pod_ready.go:94] pod "kube-apiserver-ha-472819-m02" is "Ready"
	I1101 11:12:27.718733  564163 pod_ready.go:86] duration metric: took 400.522554ms for pod "kube-apiserver-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.718769  564163 pod_ready.go:83] waiting for pod "kube-apiserver-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:27.915203  564163 request.go:683] "Waited before sending request" delay="196.340399ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-472819-m03"
	I1101 11:12:28.114358  564163 request.go:683] "Waited before sending request" delay="195.249259ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:28.117775  564163 pod_ready.go:94] pod "kube-apiserver-ha-472819-m03" is "Ready"
	I1101 11:12:28.117807  564163 pod_ready.go:86] duration metric: took 399.013365ms for pod "kube-apiserver-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.314349  564163 request.go:683] "Waited before sending request" delay="196.417856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1101 11:12:28.318465  564163 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.514898  564163 request.go:683] "Waited before sending request" delay="196.319665ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472819"
	I1101 11:12:28.714748  564163 request.go:683] "Waited before sending request" delay="196.35411ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:28.718210  564163 pod_ready.go:94] pod "kube-controller-manager-ha-472819" is "Ready"
	I1101 11:12:28.718240  564163 pod_ready.go:86] duration metric: took 399.743024ms for pod "kube-controller-manager-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.718250  564163 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:28.914684  564163 request.go:683] "Waited before sending request" delay="196.336731ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472819-m02"
	I1101 11:12:29.114836  564163 request.go:683] "Waited before sending request" delay="196.362029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:29.118262  564163 pod_ready.go:94] pod "kube-controller-manager-ha-472819-m02" is "Ready"
	I1101 11:12:29.118307  564163 pod_ready.go:86] duration metric: took 400.051245ms for pod "kube-controller-manager-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.118318  564163 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.314606  564163 request.go:683] "Waited before sending request" delay="196.212242ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-472819-m03"
	I1101 11:12:29.514310  564163 request.go:683] "Waited before sending request" delay="196.164808ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:29.519832  564163 pod_ready.go:94] pod "kube-controller-manager-ha-472819-m03" is "Ready"
	I1101 11:12:29.519865  564163 pod_ready.go:86] duration metric: took 401.539002ms for pod "kube-controller-manager-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.715277  564163 request.go:683] "Waited before sending request" delay="195.313572ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1101 11:12:29.719110  564163 pod_ready.go:83] waiting for pod "kube-proxy-47prj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:29.914444  564163 request.go:683] "Waited before sending request" delay="195.229952ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47prj"
	I1101 11:12:30.115044  564163 request.go:683] "Waited before sending request" delay="197.189419ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:30.122807  564163 pod_ready.go:94] pod "kube-proxy-47prj" is "Ready"
	I1101 11:12:30.122901  564163 pod_ready.go:86] duration metric: took 403.759526ms for pod "kube-proxy-47prj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.122920  564163 pod_ready.go:83] waiting for pod "kube-proxy-djfvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.314268  564163 request.go:683] "Waited before sending request" delay="191.266408ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djfvb"
	I1101 11:12:30.515306  564163 request.go:683] "Waited before sending request" delay="197.533768ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:30.518543  564163 pod_ready.go:94] pod "kube-proxy-djfvb" is "Ready"
	I1101 11:12:30.518576  564163 pod_ready.go:86] duration metric: took 395.647433ms for pod "kube-proxy-djfvb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.518587  564163 pod_ready.go:83] waiting for pod "kube-proxy-gc4g4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:30.715050  564163 request.go:683] "Waited before sending request" delay="196.355957ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gc4g4"
	I1101 11:12:30.915104  564163 request.go:683] "Waited before sending request" delay="194.316785ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:30.920032  564163 pod_ready.go:94] pod "kube-proxy-gc4g4" is "Ready"
	I1101 11:12:30.920064  564163 pod_ready.go:86] duration metric: took 401.469274ms for pod "kube-proxy-gc4g4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.114457  564163 request.go:683] "Waited before sending request" delay="194.275438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1101 11:12:31.118943  564163 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.314321  564163 request.go:683] "Waited before sending request" delay="195.278536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-472819"
	I1101 11:12:31.515230  564163 request.go:683] "Waited before sending request" delay="197.303661ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819"
	I1101 11:12:31.518454  564163 pod_ready.go:94] pod "kube-scheduler-ha-472819" is "Ready"
	I1101 11:12:31.518487  564163 pod_ready.go:86] duration metric: took 399.509488ms for pod "kube-scheduler-ha-472819" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.518498  564163 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.714950  564163 request.go:683] "Waited before sending request" delay="196.3489ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-472819-m02"
	I1101 11:12:31.914922  564163 request.go:683] "Waited before sending request" delay="196.326566ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m02"
	I1101 11:12:31.918119  564163 pod_ready.go:94] pod "kube-scheduler-ha-472819-m02" is "Ready"
	I1101 11:12:31.918150  564163 pod_ready.go:86] duration metric: took 399.645153ms for pod "kube-scheduler-ha-472819-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:31.918159  564163 pod_ready.go:83] waiting for pod "kube-scheduler-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:32.114581  564163 request.go:683] "Waited before sending request" delay="196.334475ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-472819-m03"
	I1101 11:12:32.315116  564163 request.go:683] "Waited before sending request" delay="196.314915ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-472819-m03"
	I1101 11:12:32.318291  564163 pod_ready.go:94] pod "kube-scheduler-ha-472819-m03" is "Ready"
	I1101 11:12:32.318319  564163 pod_ready.go:86] duration metric: took 400.143654ms for pod "kube-scheduler-ha-472819-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:12:32.318333  564163 pod_ready.go:40] duration metric: took 6.005166383s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
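The "extra waiting" phase above lists kube-system pods by the component/k8s-app labels and checks each pod's Ready condition; the interleaved "Waited before sending request" lines are client-go's default client-side rate limiter kicking in. A condensed sketch of one pass over those selectors (kubeconfig path assumed as before):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One selector per control-plane component, mirroring the label list in the log.
	for _, sel := range []string{"component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "component=kube-scheduler",
		"k8s-app=kube-dns", "k8s-app=kube-proxy"} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```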
	I1101 11:12:32.374771  564163 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 11:12:32.379957  564163 out.go:179] * Done! kubectl is now configured to use "ha-472819" cluster and "default" namespace by default
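The closing version line compares the local kubectl client (1.33.2) with the cluster's server version (v1.34.1). The server side of that check is a single discovery call; a short sketch under the same kubeconfig assumption as above:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	// The log's "minor skew: 1" compares this server version (v1.34.1)
	// against the local kubectl client (1.33.2).
	fmt.Println("server:", v.GitVersion)
}
```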
	
	
	==> CRI-O <==
	Nov 01 11:10:29 ha-472819 crio[835]: time="2025-11-01T11:10:29.724673509Z" level=info msg="Created container b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a: kube-system/coredns-66bc5c9577-n2tp2/coredns" id=4a2a84c1-22ae-4f46-8149-9f44dec0e1df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:10:29 ha-472819 crio[835]: time="2025-11-01T11:10:29.726091618Z" level=info msg="Starting container: b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a" id=182c8112-171d-4979-810a-ec966f9ee5bd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:10:29 ha-472819 crio[835]: time="2025-11-01T11:10:29.728389345Z" level=info msg="Started container" PID=1830 containerID=b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a description=kube-system/coredns-66bc5c9577-n2tp2/coredns id=182c8112-171d-4979-810a-ec966f9ee5bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.120480779Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-lm6r8/POD" id=ae159a9e-0cdc-4387-8101-f3339714c067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.120560313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.151990285Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-lm6r8 Namespace:default ID:1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 UID:3faf7e64-22cf-4338-92ef-39a2978dacb5 NetNS:/var/run/netns/273515f3-fe48-4c9a-a5a0-ca5b0e3ab433 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d90530}] Aliases:map[]}"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.152188171Z" level=info msg="Adding pod default_busybox-7b57f96db7-lm6r8 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.174049137Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-lm6r8 Namespace:default ID:1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 UID:3faf7e64-22cf-4338-92ef-39a2978dacb5 NetNS:/var/run/netns/273515f3-fe48-4c9a-a5a0-ca5b0e3ab433 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d90530}] Aliases:map[]}"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.174388982Z" level=info msg="Checking pod default_busybox-7b57f96db7-lm6r8 for CNI network kindnet (type=ptp)"
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.18392826Z" level=info msg="Ran pod sandbox 1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 with infra container: default/busybox-7b57f96db7-lm6r8/POD" id=ae159a9e-0cdc-4387-8101-f3339714c067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.18737883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=f0278987-4284-4ba6-99d3-3e1b3bcbe42b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.187693714Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=f0278987-4284-4ba6-99d3-3e1b3bcbe42b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.187803098Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28 found" id=f0278987-4284-4ba6-99d3-3e1b3bcbe42b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.189366827Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=8af5159f-b7ff-433f-9810-f5aaf54d8516 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:12:34 ha-472819 crio[835]: time="2025-11-01T11:12:34.19276737Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.210173626Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=8af5159f-b7ff-433f-9810-f5aaf54d8516 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.211378047Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=62e5c4c6-2347-4768-b7b2-e7d361a90c66 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.213299064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=9d165b72-72dd-44dd-baee-e3b9079ec16f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.224587271Z" level=info msg="Creating container: default/busybox-7b57f96db7-lm6r8/busybox" id=f3cbc5a1-6aaf-468c-ad23-e310e4e6c169 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.224848058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.249270156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.250049973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.282130596Z" level=info msg="Created container dff6a4a869cee8df9dc4d3d269f3081a5f7b6994fbe3813528d07d7a06f03fb6: default/busybox-7b57f96db7-lm6r8/busybox" id=f3cbc5a1-6aaf-468c-ad23-e310e4e6c169 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.288280744Z" level=info msg="Starting container: dff6a4a869cee8df9dc4d3d269f3081a5f7b6994fbe3813528d07d7a06f03fb6" id=0733d7c4-1f36-4706-b6d2-98a90d511fb9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:12:36 ha-472819 crio[835]: time="2025-11-01T11:12:36.300369166Z" level=info msg="Started container" PID=1986 containerID=dff6a4a869cee8df9dc4d3d269f3081a5f7b6994fbe3813528d07d7a06f03fb6 description=default/busybox-7b57f96db7-lm6r8/busybox id=0733d7c4-1f36-4706-b6d2-98a90d511fb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	dff6a4a869cee       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   10 minutes ago      Running             busybox                   0                   1d1abc560619e       busybox-7b57f96db7-lm6r8            default
	b91918178a88a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 minutes ago      Running             coredns                   0                   2c45f2568b0e8       coredns-66bc5c9577-n2tp2            kube-system
	c8ab7117746d2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 minutes ago      Running             coredns                   0                   f161ed77d0204       coredns-66bc5c9577-bntfw            kube-system
	f3816faa8e434       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 minutes ago      Running             storage-provisioner       0                   48ac5b7666614       storage-provisioner                 kube-system
	7078104c50ff2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      12 minutes ago      Running             kube-proxy                0                   cbb2812743bd4       kube-proxy-djfvb                    kube-system
	6af4febe46d8a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      12 minutes ago      Running             kindnet-cni               0                   45d9b924aafb5       kindnet-dkhrw                       kube-system
	58f10619def7f       ghcr.io/kube-vip/kube-vip@sha256:a9c131fb1bd4690cd4563761c2f545eb89b92cc8ea19aec96c833d1b4b0211eb     13 minutes ago      Running             kube-vip                  0                   1eb623b05a53f       kube-vip-ha-472819                  kube-system
	91af80c077c55       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      13 minutes ago      Running             kube-apiserver            0                   86e0901f54771       kube-apiserver-ha-472819            kube-system
	f940f08b4a7e5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      13 minutes ago      Running             kube-scheduler            0                   956e3189233cf       kube-scheduler-ha-472819            kube-system
	640585dbb86b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      13 minutes ago      Running             etcd                      0                   b159389b39c8d       etcd-ha-472819                      kube-system
	6bf6ea4411cda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      13 minutes ago      Running             kube-controller-manager   0                   feba7cf49ce4a       kube-controller-manager-ha-472819   kube-system
	
	
	==> coredns [b91918178a88a5685429c28d6c36fba100356470fd0f83517aa7e116b189eb4a] <==
	[INFO] 10.244.2.2:44007 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.115679076s
	[INFO] 10.244.0.4:48262 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000087016s
	[INFO] 10.244.1.2:44384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134082s
	[INFO] 10.244.1.2:56915 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.0000838s
	[INFO] 10.244.1.2:41130 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.00008604s
	[INFO] 10.244.2.2:49290 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001659803s
	[INFO] 10.244.2.2:38229 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154456s
	[INFO] 10.244.0.4:54270 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004350553s
	[INFO] 10.244.0.4:50580 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000195005s
	[INFO] 10.244.1.2:42868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013002s
	[INFO] 10.244.1.2:36862 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001492458s
	[INFO] 10.244.1.2:48136 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177652s
	[INFO] 10.244.1.2:50876 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135198s
	[INFO] 10.244.1.2:44573 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001077618s
	[INFO] 10.244.1.2:38478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152281s
	[INFO] 10.244.2.2:52114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135033s
	[INFO] 10.244.2.2:49246 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184634s
	[INFO] 10.244.2.2:58049 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00018548s
	[INFO] 10.244.2.2:35795 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086098s
	[INFO] 10.244.0.4:60969 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090381s
	[INFO] 10.244.1.2:54184 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173106s
	[INFO] 10.244.1.2:37354 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067488s
	[INFO] 10.244.2.2:38119 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161463s
	[INFO] 10.244.2.2:47922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124909s
	[INFO] 10.244.0.4:45686 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119181s
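These entries are ordinary UDP lookups hitting the cluster DNS service (10.96.0.10, which appears octet-reversed in the PTR queries above). A Go resolver pinned to that address reproduces them, assuming it runs somewhere with a route to the service network:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force all lookups through the cluster DNS service instead of /etc/resolv.conf.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx := context.Background()
	for _, name := range []string{
		"kubernetes.default.svc.cluster.local",
		"host.minikube.internal",
	} {
		addrs, err := r.LookupHost(ctx, name)
		fmt.Println(name, addrs, err)
	}
}
```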
	
	
	==> coredns [c8ab7117746d22a14339221aee8d8b6add959c38472cacd236bfc7b815920794] <==
	[INFO] 10.244.2.2:46792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103722s
	[INFO] 10.244.2.2:60032 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116769s
	[INFO] 10.244.2.2:32816 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157507s
	[INFO] 10.244.0.4:34068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119369s
	[INFO] 10.244.0.4:37836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003113598s
	[INFO] 10.244.0.4:33426 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00023439s
	[INFO] 10.244.0.4:48388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131604s
	[INFO] 10.244.0.4:39230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00017862s
	[INFO] 10.244.0.4:50943 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083883s
	[INFO] 10.244.1.2:46996 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014355s
	[INFO] 10.244.1.2:58261 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099439s
	[INFO] 10.244.0.4:58425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010762s
	[INFO] 10.244.0.4:55611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159009s
	[INFO] 10.244.0.4:48378 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115505s
	[INFO] 10.244.1.2:38348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116711s
	[INFO] 10.244.1.2:59106 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220499s
	[INFO] 10.244.2.2:37813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000152002s
	[INFO] 10.244.2.2:56106 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019215s
	[INFO] 10.244.0.4:41265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094089s
	[INFO] 10.244.0.4:47425 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059792s
	[INFO] 10.244.0.4:33602 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068554s
	[INFO] 10.244.1.2:35104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143871s
	[INFO] 10.244.1.2:59782 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081625s
	[INFO] 10.244.1.2:39995 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080247s
	[INFO] 10.244.1.2:46827 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061088s
	
	
	==> describe nodes <==
	Name:               ha-472819
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_09_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:09:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:09:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:09:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:09:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:22:37 +0000   Sat, 01 Nov 2025 11:10:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-472819
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                60304d9d-d149-4b0e-8acf-98dc18a25376
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lm6r8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-bntfw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-n2tp2             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-472819                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-dkhrw                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-472819             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-472819    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-djfvb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-472819             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-472819                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 13m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m   kubelet          Node ha-472819 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m   kubelet          Node ha-472819 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m   kubelet          Node ha-472819 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m   node-controller  Node ha-472819 event: Registered Node ha-472819 in Controller
	  Normal   RegisteredNode           12m   node-controller  Node ha-472819 event: Registered Node ha-472819 in Controller
	  Normal   NodeReady                12m   kubelet          Node ha-472819 status is now: NodeReady
	  Normal   RegisteredNode           11m   node-controller  Node ha-472819 event: Registered Node ha-472819 in Controller
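The describe output above (and the unreachable taints with Unknown conditions reported for ha-472819-m02 below) can be summarized programmatically. A client-go sketch that prints each node's Ready condition and taints, under the same kubeconfig assumption as the earlier sketches:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status)
			}
		}
		// A healthy control-plane node shows Ready=True and only the expected
		// control-plane taints (none here, since minikube removes NoSchedule).
		fmt.Printf("%-16s Ready=%-8s taints=%d\n", n.Name, ready, len(n.Spec.Taints))
		for _, t := range n.Spec.Taints {
			fmt.Printf("  %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}
}
```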
	
	
	Name:               ha-472819-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_01T11_10_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:10:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:14:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 01 Nov 2025 11:12:41 +0000   Sat, 01 Nov 2025 11:14:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-472819-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c598c781-8aa3-4c9a-acbe-21bfb38aa260
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-x679v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-472819-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-cw2kt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-472819-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-472819-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-47prj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-472819-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-472819-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-472819-m02 event: Registered Node ha-472819-m02 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-472819-m02 event: Registered Node ha-472819-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-472819-m02 event: Registered Node ha-472819-m02 in Controller
	  Normal  NodeNotReady    7m51s  node-controller  Node ha-472819-m02 status is now: NodeNotReady
	
	
	Name:               ha-472819-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_01T11_11_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:22:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:11:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:11:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:11:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:18:01 +0000   Sat, 01 Nov 2025 11:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-472819-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dab78d92-59b1-457d-81c0-7efcc6e5bf35
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-7m8cp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-472819-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-mz6bw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-472819-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-472819-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gc4g4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-472819-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-472819-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                Age   From             Message
	  ----    ------                ----  ----             -------
	  Normal  Starting              11m   kube-proxy       
	  Normal  CIDRAssignmentFailed  11m   cidrAllocator    Node ha-472819-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        11m   node-controller  Node ha-472819-m03 event: Registered Node ha-472819-m03 in Controller
	  Normal  RegisteredNode        11m   node-controller  Node ha-472819-m03 event: Registered Node ha-472819-m03 in Controller
	  Normal  RegisteredNode        11m   node-controller  Node ha-472819-m03 event: Registered Node ha-472819-m03 in Controller
	
	
	Name:               ha-472819-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-472819-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=ha-472819
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_01T11_13_00_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-472819-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:22:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:20:48 +0000   Sat, 01 Nov 2025 11:13:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-472819-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e495f925-4ef2-41a0-86db-65c0daddf116
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-x67zv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kindnet-88sf2               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m48s
	  kube-system                 kube-proxy-79nw9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     9m48s                  cidrAllocator    Node ha-472819-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     9m48s                  cidrAllocator    Node ha-472819-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  9m48s (x3 over 9m48s)  kubelet          Node ha-472819-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x3 over 9m48s)  kubelet          Node ha-472819-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x3 over 9m48s)  kubelet          Node ha-472819-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m46s                  node-controller  Node ha-472819-m04 event: Registered Node ha-472819-m04 in Controller
	  Normal  RegisteredNode           9m46s                  node-controller  Node ha-472819-m04 event: Registered Node ha-472819-m04 in Controller
	  Normal  RegisteredNode           9m46s                  node-controller  Node ha-472819-m04 event: Registered Node ha-472819-m04 in Controller
	  Normal  NodeReady                9m5s                   kubelet          Node ha-472819-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 10:33] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:34] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:36] overlayfs: idmapped layers are currently not supported
	[ +20.644099] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:38] overlayfs: idmapped layers are currently not supported
	[ +26.122524] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:39] overlayfs: idmapped layers are currently not supported
	[  +9.289237] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:40] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:41] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:43] overlayfs: idmapped layers are currently not supported
	[ +12.370416] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:47] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 10:49] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:55] overlayfs: idmapped layers are currently not supported
	[Nov 1 10:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:09] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [640585dbb86b99532c1a5c54e4cb7548846d3ee044b85ae39e75a467ff5a3081] <==
	{"level":"warn","ts":"2025-11-01T11:22:20.328463Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:21.723300Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:21.723316Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:24.330041Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:24.330107Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:26.724057Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:26.724046Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:28.331190Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:28.331378Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:31.725003Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:31.725022Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:32.332651Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:32.332707Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.334390Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.334512Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.725833Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:36.725839Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:40.335613Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:40.335670Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:41.726016Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:41.726102Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:44.337230Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:44.337315Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99ad86fd494346b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:46.726611Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99ad86fd494346b","rtt":"31.357601ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-01T11:22:46.726745Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99ad86fd494346b","rtt":"44.334372ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 11:22:47 up  3:05,  0 user,  load average: 0.99, 0.97, 1.37
	Linux ha-472819 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6af4febe46d8a121ee2c8a9dbe81d96bb1173a205d2aadbbf0c7fd9d38d70f1b] <==
	I1101 11:22:08.933679       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:18.941992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:22:18.942027       1 main.go:301] handling current node
	I1101 11:22:18.942044       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1101 11:22:18.942051       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:18.942222       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1101 11:22:18.942231       1 main.go:324] Node ha-472819-m03 has CIDR [10.244.2.0/24] 
	I1101 11:22:18.942293       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1101 11:22:18.942304       1 main.go:324] Node ha-472819-m04 has CIDR [10.244.4.0/24] 
	I1101 11:22:28.940035       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1101 11:22:28.940091       1 main.go:324] Node ha-472819-m04 has CIDR [10.244.4.0/24] 
	I1101 11:22:28.940300       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:22:28.940311       1 main.go:301] handling current node
	I1101 11:22:28.940352       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1101 11:22:28.940360       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:28.940440       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1101 11:22:28.940446       1 main.go:324] Node ha-472819-m03 has CIDR [10.244.2.0/24] 
	I1101 11:22:38.933759       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 11:22:38.933796       1 main.go:301] handling current node
	I1101 11:22:38.933812       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1101 11:22:38.933819       1 main.go:324] Node ha-472819-m02 has CIDR [10.244.1.0/24] 
	I1101 11:22:38.933954       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1101 11:22:38.933969       1 main.go:324] Node ha-472819-m03 has CIDR [10.244.2.0/24] 
	I1101 11:22:38.934025       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1101 11:22:38.934036       1 main.go:324] Node ha-472819-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8] <==
	I1101 11:09:40.797825       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:09:40.850033       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:09:40.915102       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 11:09:40.924959       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 11:09:40.926249       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:09:40.931594       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:09:41.095631       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 11:09:41.859882       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 11:09:41.878887       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 11:09:41.887812       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 11:09:46.450412       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 11:09:47.188658       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:09:47.202210       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:09:47.251429       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 11:12:37.069649       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1101 11:12:37.755999       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39660: use of closed network connection
	E1101 11:12:38.787350       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39724: use of closed network connection
	E1101 11:12:39.032436       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39744: use of closed network connection
	E1101 11:12:39.263792       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39766: use of closed network connection
	E1101 11:12:39.693531       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39796: use of closed network connection
	E1101 11:12:40.115015       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39810: use of closed network connection
	E1101 11:12:40.338287       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39828: use of closed network connection
	E1101 11:12:40.547346       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39850: use of closed network connection
	E1101 11:12:40.760719       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39870: use of closed network connection
	I1101 11:19:38.988023       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6bf6ea4411cda5dbfef374975a27f08c60164beec1853c8ba8df3c4f23b6c666] <==
	I1101 11:09:46.183504       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472819" podCIDRs=["10.244.0.0/24"]
	I1101 11:09:46.190773       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 11:10:18.068385       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472819-m02\" does not exist"
	I1101 11:10:18.086957       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472819-m02" podCIDRs=["10.244.1.0/24"]
	I1101 11:10:21.154242       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472819-m02"
	I1101 11:10:30.317774       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tql2r EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tql2r\": the object has been modified; please apply your changes to the latest version and try again"
	I1101 11:10:30.318055       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e71f1e87-b843-4235-9d7d-ceeca6034661", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tql2r EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tql2r": the object has been modified; please apply your changes to the latest version and try again
	I1101 11:10:31.155699       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1101 11:11:42.459308       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-drx6q failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-drx6q\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1101 11:11:42.471104       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-drx6q failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-drx6q\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1101 11:11:42.992046       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472819-m03\" does not exist"
	I1101 11:11:43.069592       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-472819-m03" podCIDRs=["10.244.2.0/24"]
	I1101 11:11:46.211739       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472819-m03"
	E1101 11:12:59.200871       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-mdtrv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-mdtrv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1101 11:12:59.200944       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-mdtrv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-mdtrv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1101 11:12:59.416895       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-472819-m04\" does not exist"
	E1101 11:12:59.620317       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-472819-m04\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.3.0/24\",\"10.244.4.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-472819-m04" podCIDRs=["10.244.3.0/24"]
	E1101 11:12:59.620451       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-472819-m04\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.3.0/24\",\"10.244.4.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-472819-m04"
	E1101 11:12:59.620535       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'ha-472819-m04': failed to patch node CIDR: Node \"ha-472819-m04\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.3.0/24\",\"10.244.4.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	E1101 11:12:59.854362       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"c68c3d5f-200a-4729-99a6-399d13923da3\", ResourceVersion:\"902\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 1, 11, 9, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40019ffba0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\",
Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(
nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4002d4d580), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e12768), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeS
ource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolu
meSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e12780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualD
iskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.34.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x4002c98ab0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Resou
rceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecyc
le:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x4002a864e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002d266f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002f47a70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", Tolerat
ionSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4002d0b960)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4002d26750)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:
3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1101 11:13:01.243524       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-472819-m04"
	I1101 11:13:42.484705       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-472819-m04"
	I1101 11:14:56.316641       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-472819-m04"
	I1101 11:19:56.579535       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-x679v"
	
	
	==> kube-proxy [7078104c50ff2f92f7e2c1df5b91f0bd0cf730fe4a2b36f8082f1d451dd65225] <==
	I1101 11:09:48.742762       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:09:48.836468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:09:48.950591       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:09:48.950696       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 11:09:48.950821       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:09:49.042998       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:09:49.043148       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:09:49.123164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:09:49.123454       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:09:49.123477       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:09:49.125197       1 config.go:200] "Starting service config controller"
	I1101 11:09:49.125224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:09:49.125242       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:09:49.125247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:09:49.125257       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:09:49.125261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:09:49.126013       1 config.go:309] "Starting node config controller"
	I1101 11:09:49.126032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:09:49.126039       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:09:49.225716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:09:49.225755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:09:49.225779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f940f08b4a7e5f2a89503aec05980619c7af103b702262fe033b3ddbff81a5db] <==
	I1101 11:12:33.814471       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-7m8cp" node="ha-472819-m03"
	E1101 11:12:59.527016       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j9j6f\": pod kube-proxy-j9j6f is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j9j6f" node="ha-472819-m04"
	E1101 11:12:59.527158       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 82874d15-7a7e-4291-bdfe-322ff3beceb7(kube-system/kube-proxy-j9j6f) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-j9j6f"
	E1101 11:12:59.527232       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j9j6f\": pod kube-proxy-j9j6f is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-j9j6f"
	I1101 11:12:59.531550       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j9j6f" node="ha-472819-m04"
	E1101 11:12:59.593960       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r2qzc\": pod kindnet-r2qzc is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r2qzc" node="ha-472819-m04"
	E1101 11:12:59.594130       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5cf8f544-dc95-4924-b3df-2e668d7cd5bd(kube-system/kindnet-r2qzc) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-r2qzc"
	E1101 11:12:59.594206       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r2qzc\": pod kindnet-r2qzc is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kindnet-r2qzc"
	I1101 11:12:59.597504       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r2qzc" node="ha-472819-m04"
	E1101 11:12:59.662573       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hdmgp\": pod kindnet-hdmgp is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hdmgp" node="ha-472819-m04"
	E1101 11:12:59.662698       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5db403fb-b722-43b5-a7f8-72eb2cb15ab8(kube-system/kindnet-hdmgp) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-hdmgp"
	E1101 11:12:59.662754       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hdmgp\": pod kindnet-hdmgp is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kindnet-hdmgp"
	I1101 11:12:59.672405       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hdmgp" node="ha-472819-m04"
	E1101 11:12:59.723505       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-88sf2\": pod kindnet-88sf2 is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-88sf2" node="ha-472819-m04"
	E1101 11:12:59.723658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 61cdb567-0db6-43a9-b37e-206c4b1e424b(kube-system/kindnet-88sf2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-88sf2"
	E1101 11:12:59.723717       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-88sf2\": pod kindnet-88sf2 is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kindnet-88sf2"
	I1101 11:12:59.724870       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-88sf2" node="ha-472819-m04"
	E1101 11:12:59.725681       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-chm7z\": pod kube-proxy-chm7z is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-chm7z" node="ha-472819-m04"
	E1101 11:12:59.725836       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 91c0e8b9-9d13-45a7-b93c-cbc34b19bbf2(kube-system/kube-proxy-chm7z) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-chm7z"
	E1101 11:12:59.725898       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-chm7z\": pod kube-proxy-chm7z is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-chm7z"
	I1101 11:12:59.727068       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-chm7z" node="ha-472819-m04"
	E1101 11:19:56.681886       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x67zv\": pod busybox-7b57f96db7-x67zv is already assigned to node \"ha-472819-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-x67zv" node="ha-472819-m04"
	E1101 11:19:56.681946       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 26e53bec-ba78-49cd-9271-6982e344344b(default/busybox-7b57f96db7-x67zv) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-x67zv"
	E1101 11:19:56.681966       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-x67zv\": pod busybox-7b57f96db7-x67zv is already assigned to node \"ha-472819-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-x67zv"
	I1101 11:19:56.683079       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-x67zv" node="ha-472819-m04"
	
	
	==> kubelet <==
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376661    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c010b85-48bd-4004-886f-fbe4e03884a9-xtables-lock\") pod \"kube-proxy-djfvb\" (UID: \"2c010b85-48bd-4004-886f-fbe4e03884a9\") " pod="kube-system/kube-proxy-djfvb"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376718    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c010b85-48bd-4004-886f-fbe4e03884a9-lib-modules\") pod \"kube-proxy-djfvb\" (UID: \"2c010b85-48bd-4004-886f-fbe4e03884a9\") " pod="kube-system/kube-proxy-djfvb"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376737    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcg8\" (UniqueName: \"kubernetes.io/projected/2c010b85-48bd-4004-886f-fbe4e03884a9-kube-api-access-zwcg8\") pod \"kube-proxy-djfvb\" (UID: \"2c010b85-48bd-4004-886f-fbe4e03884a9\") " pod="kube-system/kube-proxy-djfvb"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376798    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-xtables-lock\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376817    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qtk\" (UniqueName: \"kubernetes.io/projected/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-kube-api-access-56qtk\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376871    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-cni-cfg\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:47 ha-472819 kubelet[1341]: I1101 11:09:47.376891    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb3d05e-e447-4fe5-8996-26e79d7e2b4d-lib-modules\") pod \"kindnet-dkhrw\" (UID: \"abb3d05e-e447-4fe5-8996-26e79d7e2b4d\") " pod="kube-system/kindnet-dkhrw"
	Nov 01 11:09:48 ha-472819 kubelet[1341]: I1101 11:09:48.435774    1341 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 11:09:48 ha-472819 kubelet[1341]: I1101 11:09:48.997603    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dkhrw" podStartSLOduration=1.997582561 podStartE2EDuration="1.997582561s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:09:48.987766505 +0000 UTC m=+7.290989791" watchObservedRunningTime="2025-11-01 11:09:48.997582561 +0000 UTC m=+7.300805838"
	Nov 01 11:09:50 ha-472819 kubelet[1341]: I1101 11:09:50.808988    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-djfvb" podStartSLOduration=3.808968293 podStartE2EDuration="3.808968293s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:09:49.024949256 +0000 UTC m=+7.328172541" watchObservedRunningTime="2025-11-01 11:09:50.808968293 +0000 UTC m=+9.112191562"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.183523    1341 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.306931    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbgz\" (UniqueName: \"kubernetes.io/projected/17503733-2ab6-460c-aa3f-21d031c70abd-kube-api-access-kpbgz\") pod \"coredns-66bc5c9577-bntfw\" (UID: \"17503733-2ab6-460c-aa3f-21d031c70abd\") " pod="kube-system/coredns-66bc5c9577-bntfw"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.307142    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/18119b45-4932-4521-b0e9-e3a73bc6d3b1-tmp\") pod \"storage-provisioner\" (UID: \"18119b45-4932-4521-b0e9-e3a73bc6d3b1\") " pod="kube-system/storage-provisioner"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.307230    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17503733-2ab6-460c-aa3f-21d031c70abd-config-volume\") pod \"coredns-66bc5c9577-bntfw\" (UID: \"17503733-2ab6-460c-aa3f-21d031c70abd\") " pod="kube-system/coredns-66bc5c9577-bntfw"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.307322    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltf68\" (UniqueName: \"kubernetes.io/projected/18119b45-4932-4521-b0e9-e3a73bc6d3b1-kube-api-access-ltf68\") pod \"storage-provisioner\" (UID: \"18119b45-4932-4521-b0e9-e3a73bc6d3b1\") " pod="kube-system/storage-provisioner"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.408072    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6711b0-f71a-421e-922d-eb44266c95a4-config-volume\") pod \"coredns-66bc5c9577-n2tp2\" (UID: \"4b6711b0-f71a-421e-922d-eb44266c95a4\") " pod="kube-system/coredns-66bc5c9577-n2tp2"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: I1101 11:10:29.408314    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfm8p\" (UniqueName: \"kubernetes.io/projected/4b6711b0-f71a-421e-922d-eb44266c95a4-kube-api-access-gfm8p\") pod \"coredns-66bc5c9577-n2tp2\" (UID: \"4b6711b0-f71a-421e-922d-eb44266c95a4\") " pod="kube-system/coredns-66bc5c9577-n2tp2"
	Nov 01 11:10:29 ha-472819 kubelet[1341]: W1101 11:10:29.609774    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio-f161ed77d020465b8012f1a83590dec691bc6100c6055b30c7d61753e2d2be2a WatchSource:0}: Error finding container f161ed77d020465b8012f1a83590dec691bc6100c6055b30c7d61753e2d2be2a: Status 404 returned error can't find the container with id f161ed77d020465b8012f1a83590dec691bc6100c6055b30c7d61753e2d2be2a
	Nov 01 11:10:29 ha-472819 kubelet[1341]: W1101 11:10:29.613683    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio-2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d WatchSource:0}: Error finding container 2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d: Status 404 returned error can't find the container with id 2c45f2568b0e8e33cb1da636920d9b841b29c754a967265ee7a2ff1803ba718d
	Nov 01 11:10:30 ha-472819 kubelet[1341]: I1101 11:10:30.116802    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bntfw" podStartSLOduration=43.116782296 podStartE2EDuration="43.116782296s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:10:30.116630032 +0000 UTC m=+48.419853300" watchObservedRunningTime="2025-11-01 11:10:30.116782296 +0000 UTC m=+48.420005573"
	Nov 01 11:10:30 ha-472819 kubelet[1341]: I1101 11:10:30.270805    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.270784831 podStartE2EDuration="43.270784831s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:10:30.247572064 +0000 UTC m=+48.550795357" watchObservedRunningTime="2025-11-01 11:10:30.270784831 +0000 UTC m=+48.574008108"
	Nov 01 11:12:33 ha-472819 kubelet[1341]: I1101 11:12:33.798528    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n2tp2" podStartSLOduration=166.798500202 podStartE2EDuration="2m46.798500202s" podCreationTimestamp="2025-11-01 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:10:30.272373486 +0000 UTC m=+48.575596780" watchObservedRunningTime="2025-11-01 11:12:33.798500202 +0000 UTC m=+172.101723635"
	Nov 01 11:12:33 ha-472819 kubelet[1341]: I1101 11:12:33.919051    1341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqgfw\" (UniqueName: \"kubernetes.io/projected/3faf7e64-22cf-4338-92ef-39a2978dacb5-kube-api-access-dqgfw\") pod \"busybox-7b57f96db7-lm6r8\" (UID: \"3faf7e64-22cf-4338-92ef-39a2978dacb5\") " pod="default/busybox-7b57f96db7-lm6r8"
	Nov 01 11:12:34 ha-472819 kubelet[1341]: W1101 11:12:34.180348    1341 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio-1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6 WatchSource:0}: Error finding container 1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6: Status 404 returned error can't find the container with id 1d1abc560619e7aa1a8b60798b93f19527128629e10f8828a25552f3c73770b6
	Nov 01 11:12:36 ha-472819 kubelet[1341]: I1101 11:12:36.528629    1341 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-lm6r8" podStartSLOduration=1.504563771 podStartE2EDuration="3.528612432s" podCreationTimestamp="2025-11-01 11:12:33 +0000 UTC" firstStartedPulling="2025-11-01 11:12:34.188294132 +0000 UTC m=+172.491517409" lastFinishedPulling="2025-11-01 11:12:36.212342793 +0000 UTC m=+174.515566070" observedRunningTime="2025-11-01 11:12:36.528156057 +0000 UTC m=+174.831379350" watchObservedRunningTime="2025-11-01 11:12:36.528612432 +0000 UTC m=+174.831835701"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-472819 -n ha-472819
helpers_test.go:269: (dbg) Run:  kubectl --context ha-472819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.80s)
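
Note on the kube-scheduler entries in the log above: the repeated "pod ... is already assigned to node" / "scheduler cache ForgetPod failed" messages are bind conflicts that typically show up when a pod (here kindnet-88sf2, kube-proxy-chm7z and busybox-7b57f96db7-x67zv) was already bound by the time the scheduler's own Bind plugin ran; in an HA profile with several control-plane nodes these are usually benign races around restarts rather than necessarily the cause of this failure. A quick spot-check, assuming the ha-472819 profile is still running, is to confirm the pods named in the errors really did land on ha-472819-m04:

	kubectl --context ha-472819 -n kube-system get pod kindnet-88sf2 kube-proxy-chm7z \
	  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,PHASE:.status.phase

If both report ha-472819-m04 and Running, the scheduler noise can be set aside and the HA status check that actually failed is the place to look.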

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.82s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-276323 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-276323 --output=json --user=testUser: exit status 80 (1.814768107s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"84ddea2d-6f40-4e8b-98b8-6c6623ff9811","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-276323 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"cb719fd7-c6bb-4a67-959a-314027894f17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T11:30:26Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5bbf78d6-29b1-4380-bf10-e991ecbb84f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-276323 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.82s)
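
The GUEST_PAUSE error above is raised while minikube enumerates running containers on the node: the embedded message shows it shelling out to `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory`. On a CRI-O node this is plausible when the low-level runtime state is not kept under runc's default root, so `runc list` has nothing to read even though containers are running. A minimal check, assuming the json-output-276323 profile is still up and using the same binary path and ssh form this report uses elsewhere:

	out/minikube-linux-arm64 ssh -p json-output-276323 "sudo runc list -f json"
	out/minikube-linux-arm64 ssh -p json-output-276323 "sudo crictl ps"

If crictl lists containers while runc does not, the runtime and state root CRI-O is actually configured with (commonly /etc/crio/crio.conf plus drop-ins in /etc/crio/crio.conf.d/ on the node) is the next thing to inspect.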

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-276323 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-276323 --output=json --user=testUser: exit status 80 (1.665919512s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7c576532-1754-4d17-98d5-c1590a8c9a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-276323 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a586d332-9dc1-4d33-82a4-4e3323a4c59a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T11:30:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"acd90754-4576-49bc-9887-660e5057171b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-276323 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.67s)
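
This unpause failure is the same underlying problem as TestJSONOutput/pause/Command above: GUEST_UNPAUSE wraps the identical `sudo runc list -f json` error (`open /run/runc: no such file or directory`), so it should clear once the runc state-root issue behind the pause failure is resolved rather than needing a separate fix.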

                                                
                                    
x
+
TestPause/serial/Pause (8.37s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-482771 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-482771 --alsologtostderr -v=5: exit status 80 (2.448092274s)

                                                
                                                
-- stdout --
	* Pausing node pause-482771 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:54:07.392580  700930 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:54:07.393272  700930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:54:07.393286  700930 out.go:374] Setting ErrFile to fd 2...
	I1101 11:54:07.393291  700930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:54:07.393585  700930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:54:07.393963  700930 out.go:368] Setting JSON to false
	I1101 11:54:07.393990  700930 mustload.go:66] Loading cluster: pause-482771
	I1101 11:54:07.394438  700930 config.go:182] Loaded profile config "pause-482771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:54:07.394889  700930 cli_runner.go:164] Run: docker container inspect pause-482771 --format={{.State.Status}}
	I1101 11:54:07.412089  700930 host.go:66] Checking if "pause-482771" exists ...
	I1101 11:54:07.412408  700930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:54:07.490539  700930 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 11:54:07.481015378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:54:07.491232  700930 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-482771 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 11:54:07.494193  700930 out.go:179] * Pausing node pause-482771 ... 
	I1101 11:54:07.497855  700930 host.go:66] Checking if "pause-482771" exists ...
	I1101 11:54:07.498207  700930 ssh_runner.go:195] Run: systemctl --version
	I1101 11:54:07.498253  700930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:54:07.518216  700930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:54:07.620429  700930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:54:07.634676  700930 pause.go:52] kubelet running: true
	I1101 11:54:07.634799  700930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:54:07.884966  700930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:54:07.885108  700930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:54:07.950987  700930 cri.go:89] found id: "a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b"
	I1101 11:54:07.951010  700930 cri.go:89] found id: "d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9"
	I1101 11:54:07.951015  700930 cri.go:89] found id: "aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0"
	I1101 11:54:07.951019  700930 cri.go:89] found id: "caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a"
	I1101 11:54:07.951023  700930 cri.go:89] found id: "e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835"
	I1101 11:54:07.951026  700930 cri.go:89] found id: "d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a"
	I1101 11:54:07.951029  700930 cri.go:89] found id: "0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7"
	I1101 11:54:07.951033  700930 cri.go:89] found id: "9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee"
	I1101 11:54:07.951036  700930 cri.go:89] found id: "16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512"
	I1101 11:54:07.951042  700930 cri.go:89] found id: "dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27"
	I1101 11:54:07.951046  700930 cri.go:89] found id: "5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a"
	I1101 11:54:07.951049  700930 cri.go:89] found id: "91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	I1101 11:54:07.951052  700930 cri.go:89] found id: "3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25"
	I1101 11:54:07.951055  700930 cri.go:89] found id: "727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d"
	I1101 11:54:07.951059  700930 cri.go:89] found id: ""
	I1101 11:54:07.951111  700930 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:54:07.961995  700930 retry.go:31] will retry after 236.030249ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:54:07Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:54:08.198410  700930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:54:08.211103  700930 pause.go:52] kubelet running: false
	I1101 11:54:08.211175  700930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:54:08.349901  700930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:54:08.350003  700930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:54:08.422775  700930 cri.go:89] found id: "a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b"
	I1101 11:54:08.422800  700930 cri.go:89] found id: "d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9"
	I1101 11:54:08.422805  700930 cri.go:89] found id: "aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0"
	I1101 11:54:08.422809  700930 cri.go:89] found id: "caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a"
	I1101 11:54:08.422812  700930 cri.go:89] found id: "e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835"
	I1101 11:54:08.422817  700930 cri.go:89] found id: "d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a"
	I1101 11:54:08.422820  700930 cri.go:89] found id: "0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7"
	I1101 11:54:08.422828  700930 cri.go:89] found id: "9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee"
	I1101 11:54:08.422831  700930 cri.go:89] found id: "16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512"
	I1101 11:54:08.422837  700930 cri.go:89] found id: "dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27"
	I1101 11:54:08.422841  700930 cri.go:89] found id: "5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a"
	I1101 11:54:08.422844  700930 cri.go:89] found id: "91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	I1101 11:54:08.422848  700930 cri.go:89] found id: "3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25"
	I1101 11:54:08.422851  700930 cri.go:89] found id: "727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d"
	I1101 11:54:08.422853  700930 cri.go:89] found id: ""
	I1101 11:54:08.422902  700930 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:54:08.433433  700930 retry.go:31] will retry after 402.976212ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:54:08Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:54:08.837085  700930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:54:08.850695  700930 pause.go:52] kubelet running: false
	I1101 11:54:08.850774  700930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:54:08.999324  700930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:54:08.999465  700930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:54:09.073272  700930 cri.go:89] found id: "a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b"
	I1101 11:54:09.073296  700930 cri.go:89] found id: "d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9"
	I1101 11:54:09.073301  700930 cri.go:89] found id: "aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0"
	I1101 11:54:09.073305  700930 cri.go:89] found id: "caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a"
	I1101 11:54:09.073308  700930 cri.go:89] found id: "e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835"
	I1101 11:54:09.073311  700930 cri.go:89] found id: "d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a"
	I1101 11:54:09.073314  700930 cri.go:89] found id: "0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7"
	I1101 11:54:09.073318  700930 cri.go:89] found id: "9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee"
	I1101 11:54:09.073321  700930 cri.go:89] found id: "16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512"
	I1101 11:54:09.073327  700930 cri.go:89] found id: "dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27"
	I1101 11:54:09.073331  700930 cri.go:89] found id: "5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a"
	I1101 11:54:09.073334  700930 cri.go:89] found id: "91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	I1101 11:54:09.073352  700930 cri.go:89] found id: "3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25"
	I1101 11:54:09.073359  700930 cri.go:89] found id: "727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d"
	I1101 11:54:09.073362  700930 cri.go:89] found id: ""
	I1101 11:54:09.073411  700930 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:54:09.084426  700930 retry.go:31] will retry after 396.412553ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:54:09Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:54:09.481083  700930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:54:09.495970  700930 pause.go:52] kubelet running: false
	I1101 11:54:09.496087  700930 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:54:09.659655  700930 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:54:09.659757  700930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:54:09.743563  700930 cri.go:89] found id: "a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b"
	I1101 11:54:09.743640  700930 cri.go:89] found id: "d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9"
	I1101 11:54:09.743672  700930 cri.go:89] found id: "aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0"
	I1101 11:54:09.743709  700930 cri.go:89] found id: "caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a"
	I1101 11:54:09.743747  700930 cri.go:89] found id: "e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835"
	I1101 11:54:09.743765  700930 cri.go:89] found id: "d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a"
	I1101 11:54:09.743784  700930 cri.go:89] found id: "0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7"
	I1101 11:54:09.743804  700930 cri.go:89] found id: "9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee"
	I1101 11:54:09.743843  700930 cri.go:89] found id: "16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512"
	I1101 11:54:09.743865  700930 cri.go:89] found id: "dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27"
	I1101 11:54:09.743884  700930 cri.go:89] found id: "5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a"
	I1101 11:54:09.743904  700930 cri.go:89] found id: "91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	I1101 11:54:09.743937  700930 cri.go:89] found id: "3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25"
	I1101 11:54:09.743958  700930 cri.go:89] found id: "727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d"
	I1101 11:54:09.743978  700930 cri.go:89] found id: ""
	I1101 11:54:09.744059  700930 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:54:09.761918  700930 out.go:203] 
	W1101 11:54:09.765084  700930 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:54:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:54:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 11:54:09.765149  700930 out.go:285] * 
	* 
	W1101 11:54:09.773189  700930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 11:54:09.776210  700930 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-482771 --alsologtostderr -v=5" : exit status 80
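The stderr trace above is worth reading next to the final error: each pause attempt first lists kube-system containers via crictl and finds fourteen running IDs, then fails only on the follow-up `sudo runc list -f json`, which is retried three times before the command exits with GUEST_PAUSE. That pattern points at a runtime state-root mismatch on the node rather than at missing containers. A possible check, assuming the pause-482771 profile is still up and using the same ssh form the suite uses elsewhere in this report (listing /run/crun is only a guess at an alternative runtime root; adjust to whatever the CRI-O config on the node names):

	out/minikube-linux-arm64 ssh -p pause-482771 "sudo ls /run/runc /run/crun"
	out/minikube-linux-arm64 ssh -p pause-482771 "sudo crictl ps --quiet | head -n 3"

If /run/runc is absent while crictl still lists containers, the root directory the configured low-level runtime keeps its state under is what the pause path's `runc list` would need to be pointed at.
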
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-482771
helpers_test.go:243: (dbg) docker inspect pause-482771:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3",
	        "Created": "2025-11-01T11:52:19.372006523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 694519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:52:19.443691134Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/hosts",
	        "LogPath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3-json.log",
	        "Name": "/pause-482771",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-482771:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-482771",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3",
	                "LowerDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-482771",
	                "Source": "/var/lib/docker/volumes/pause-482771/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-482771",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-482771",
	                "name.minikube.sigs.k8s.io": "pause-482771",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "40ec72c81e83ed1a5ef664c2ab620e92d491bce95a9c4b08daf050437e5c058a",
	            "SandboxKey": "/var/run/docker/netns/40ec72c81e83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33751"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33754"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33752"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33753"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-482771": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:25:45:28:70:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9a2dd9b03a4f7d446a2069a1390dd74ecf9fb19546f75f063fcd2c8ff7169a8",
	                    "EndpointID": "f30aec4d304279a70cabe24781c4f7a841a6429d0ddbc502250dd164b1b3c061",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-482771",
	                        "1c8f48982980"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-482771 -n pause-482771
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-482771 -n pause-482771: exit status 2 (366.749605ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-482771 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-482771 logs -n 25: (1.483238821s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:47 UTC │ 01 Nov 25 11:49 UTC │
	│ start   │ -p missing-upgrade-598273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-598273    │ jenkins │ v1.37.0 │ 01 Nov 25 11:47 UTC │ 01 Nov 25 11:48 UTC │
	│ delete  │ -p missing-upgrade-598273                                                                                                                │ missing-upgrade-598273    │ jenkins │ v1.37.0 │ 01 Nov 25 11:48 UTC │ 01 Nov 25 11:48 UTC │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:48 UTC │ 01 Nov 25 11:49 UTC │
	│ stop    │ -p kubernetes-upgrade-396779                                                                                                             │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:49 UTC │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:53 UTC │
	│ delete  │ -p NoKubernetes-656070                                                                                                                   │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:49 UTC │
	│ start   │ -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:49 UTC │
	│ ssh     │ -p NoKubernetes-656070 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │                     │
	│ stop    │ -p NoKubernetes-656070                                                                                                                   │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:50 UTC │
	│ start   │ -p NoKubernetes-656070 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:50 UTC │
	│ ssh     │ -p NoKubernetes-656070 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │                     │
	│ delete  │ -p NoKubernetes-656070                                                                                                                   │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:50 UTC │
	│ start   │ -p stopped-upgrade-043825 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-043825    │ jenkins │ v1.32.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:51 UTC │
	│ stop    │ stopped-upgrade-043825 stop                                                                                                              │ stopped-upgrade-043825    │ jenkins │ v1.32.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ start   │ -p stopped-upgrade-043825 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-043825    │ jenkins │ v1.37.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ delete  │ -p stopped-upgrade-043825                                                                                                                │ stopped-upgrade-043825    │ jenkins │ v1.37.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ start   │ -p running-upgrade-496459 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-496459    │ jenkins │ v1.32.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ start   │ -p running-upgrade-496459 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-496459    │ jenkins │ v1.37.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:52 UTC │
	│ delete  │ -p running-upgrade-496459                                                                                                                │ running-upgrade-496459    │ jenkins │ v1.37.0 │ 01 Nov 25 11:52 UTC │ 01 Nov 25 11:52 UTC │
	│ start   │ -p pause-482771 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-482771              │ jenkins │ v1.37.0 │ 01 Nov 25 11:52 UTC │ 01 Nov 25 11:53 UTC │
	│ start   │ -p pause-482771 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-482771              │ jenkins │ v1.37.0 │ 01 Nov 25 11:53 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:53 UTC │                     │
	│ pause   │ -p pause-482771 --alsologtostderr -v=5                                                                                                   │ pause-482771              │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:53:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
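
The header above uses the glog/klog prefix format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) for every entry in the trace that follows. When sifting through a run this long it can help to split entries into fields mechanically; the following is a minimal Go sketch of that, with the regular expression and field labels being assumptions made for the example rather than anything minikube ships.

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the glog/klog header: severity, MMDD, wall-clock time,
// goroutine/thread id, source file and line, then the free-form message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1101 11:53:47.224467  699286 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
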
	I1101 11:53:47.224467  699286 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:53:47.224580  699286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:53:47.224591  699286 out.go:374] Setting ErrFile to fd 2...
	I1101 11:53:47.224595  699286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:53:47.224960  699286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:53:47.225404  699286 out.go:368] Setting JSON to false
	I1101 11:53:47.226707  699286 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12977,"bootTime":1761985051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:53:47.226815  699286 start.go:143] virtualization:  
	I1101 11:53:47.230249  699286 out.go:179] * [kubernetes-upgrade-396779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:53:47.234410  699286 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:53:47.234482  699286 notify.go:221] Checking for updates...
	I1101 11:53:47.240577  699286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:53:47.243554  699286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:53:47.247292  699286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:53:47.250254  699286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:53:47.253320  699286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:53:47.257041  699286 config.go:182] Loaded profile config "kubernetes-upgrade-396779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:47.257858  699286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:53:47.298533  699286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:53:47.298789  699286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:53:47.408924  699286 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 11:53:47.399520813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
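
The docker system info --format "{{json .}}" probe above (and its repeat a few lines below) is how the driver inspects the host before reusing the existing profile. Here is a small Go sketch of the same probe, assuming only that a docker CLI is on PATH; the struct keeps just a few of the fields visible in the dump above.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same probe the log shows: ask the docker CLI for its full info as JSON.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}

	// Decode only the fields we care about; unknown keys are ignored.
	var info struct {
		ServerVersion   string
		CgroupDriver    string
		OperatingSystem string
		NCPU            int
		MemTotal        int64
	}
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("docker %s, cgroup driver %s, %s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.CgroupDriver, info.OperatingSystem, info.NCPU, info.MemTotal)
}
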
	I1101 11:53:47.409125  699286 docker.go:319] overlay module found
	I1101 11:53:47.412593  699286 out.go:179] * Using the docker driver based on existing profile
	I1101 11:53:47.415767  699286 start.go:309] selected driver: docker
	I1101 11:53:47.415827  699286 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:47.415965  699286 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:53:47.416697  699286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:53:47.529138  699286 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 11:53:47.520323858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:53:47.529474  699286 cni.go:84] Creating CNI manager for ""
	I1101 11:53:47.529530  699286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:53:47.529573  699286 start.go:353] cluster config:
	{Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:47.532946  699286 out.go:179] * Starting "kubernetes-upgrade-396779" primary control-plane node in "kubernetes-upgrade-396779" cluster
	I1101 11:53:47.537267  699286 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:53:47.540289  699286 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:53:47.543284  699286 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:53:47.543337  699286 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 11:53:47.543346  699286 cache.go:59] Caching tarball of preloaded images
	I1101 11:53:47.543423  699286 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:53:47.543431  699286 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:53:47.543559  699286 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/config.json ...
	I1101 11:53:47.543760  699286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:53:47.571601  699286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:53:47.571620  699286 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:53:47.571632  699286 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:53:47.571660  699286 start.go:360] acquireMachinesLock for kubernetes-upgrade-396779: {Name:mk9bd955707603a39df009911c13a21a1beee843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:53:47.571707  699286 start.go:364] duration metric: took 30.86µs to acquireMachinesLock for "kubernetes-upgrade-396779"
	I1101 11:53:47.571736  699286 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:53:47.571741  699286 fix.go:54] fixHost starting: 
	I1101 11:53:47.571998  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:47.602594  699286 fix.go:112] recreateIfNeeded on kubernetes-upgrade-396779: state=Running err=<nil>
	W1101 11:53:47.602623  699286 fix.go:138] unexpected machine state, will restart: <nil>
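
recreateIfNeeded keys off the container state reported by docker container inspect --format={{.State.Status}}. A hedged Go sketch of that check follows; the docker CLI invocation matches the log, but the reuse/start/recreate decisions are illustrative, not minikube's exact policy.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <container-name>", os.Args[0])
	}
	name := os.Args[1]

	// Same probe as the log: ask docker for the container's lifecycle state.
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		log.Fatalf("inspect %s: %v", name, err)
	}

	switch state := strings.TrimSpace(string(out)); state {
	case "running":
		fmt.Println("machine is running; reuse it")
	case "exited", "created":
		fmt.Println("machine exists but is stopped; start it")
	default:
		fmt.Printf("unexpected state %q; recreate the machine\n", state)
	}
}
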
	I1101 11:53:45.339019  698628 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:53:45.339106  698628 machine.go:97] duration metric: took 6.660382594s to provisionDockerMachine
	I1101 11:53:45.339135  698628 start.go:293] postStartSetup for "pause-482771" (driver="docker")
	I1101 11:53:45.339184  698628 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:53:45.339304  698628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:53:45.339396  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.387793  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:45.516074  698628 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:53:45.522192  698628 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:53:45.522227  698628 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:53:45.522242  698628 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:53:45.522314  698628 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:53:45.522405  698628 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:53:45.522529  698628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:53:45.539182  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:45.572497  698628 start.go:296] duration metric: took 233.310795ms for postStartSetup
	I1101 11:53:45.572598  698628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:53:45.572645  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.602904  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:45.707320  698628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:53:45.713261  698628 fix.go:56] duration metric: took 7.080360155s for fixHost
	I1101 11:53:45.713284  698628 start.go:83] releasing machines lock for "pause-482771", held for 7.080448369s
	I1101 11:53:45.713353  698628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-482771
	I1101 11:53:45.735611  698628 ssh_runner.go:195] Run: cat /version.json
	I1101 11:53:45.735672  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.735615  698628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:53:45.735749  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.785432  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:45.785420  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:46.111281  698628 ssh_runner.go:195] Run: systemctl --version
	I1101 11:53:46.118269  698628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:53:46.204466  698628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:53:46.210347  698628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:53:46.210448  698628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:53:46.219174  698628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:53:46.219201  698628 start.go:496] detecting cgroup driver to use...
	I1101 11:53:46.219232  698628 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:53:46.219281  698628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:53:46.242390  698628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:53:46.266459  698628 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:53:46.266525  698628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:53:46.287354  698628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:53:46.302167  698628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:53:46.541126  698628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:53:46.748981  698628 docker.go:234] disabling docker service ...
	I1101 11:53:46.749049  698628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:53:46.766120  698628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:53:46.781597  698628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:53:46.999264  698628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:53:47.189261  698628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:53:47.204271  698628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:53:47.231290  698628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:53:47.231356  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.244176  698628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:53:47.244243  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.254840  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.265916  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.275632  698628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:53:47.284715  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.296741  698628 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.306309  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.319554  698628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:53:47.341010  698628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:53:47.358705  698628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:47.679445  698628 ssh_runner.go:195] Run: sudo systemctl restart crio
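
The block above adjusts /etc/crio/crio.conf.d/02-crio.conf with a series of sed edits (pause image, cgroup manager, conmon cgroup, default_sysctls), then reloads systemd and restarts crio. For illustration, a rough stdlib-only Go equivalent of the key/value substitutions, intended to be run against a copy of the drop-in rather than the live file; the helper is an assumption for the example, not minikube code.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// setKey replaces any existing `key = ...` line with `key = "value"`,
// appending the setting if the key is not present yet.
func setKey(lines []string, key, value string) []string {
	newLine := fmt.Sprintf("%s = %q", key, value)
	for i, l := range lines {
		trimmed := strings.TrimSpace(l)
		if strings.HasPrefix(trimmed, key+" ") || strings.HasPrefix(trimmed, key+"=") {
			lines[i] = newLine
			return lines
		}
	}
	return append(lines, newLine)
}

func main() {
	const path = "02-crio.conf" // work on a copy, not the live drop-in
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(string(data), "\n")

	// The same settings the log adjusts before restarting crio.
	lines = setKey(lines, "pause_image", "registry.k8s.io/pause:3.10.1")
	lines = setKey(lines, "cgroup_manager", "cgroupfs")
	lines = setKey(lines, "conmon_cgroup", "pod")

	if err := os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated", path, "- restart crio (systemctl restart crio) to apply")
}
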
	I1101 11:53:47.911173  698628 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:53:47.911240  698628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:53:47.915368  698628 start.go:564] Will wait 60s for crictl version
	I1101 11:53:47.915425  698628 ssh_runner.go:195] Run: which crictl
	I1101 11:53:47.925068  698628 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:53:47.953552  698628 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:53:47.953641  698628 ssh_runner.go:195] Run: crio --version
	I1101 11:53:47.995847  698628 ssh_runner.go:195] Run: crio --version
	I1101 11:53:48.049719  698628 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:53:48.053152  698628 cli_runner.go:164] Run: docker network inspect pause-482771 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:53:48.077471  698628 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 11:53:48.086057  698628 kubeadm.go:884] updating cluster {Name:pause-482771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-482771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:53:48.086226  698628 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:53:48.086291  698628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:48.138698  698628 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:48.138722  698628 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:53:48.138783  698628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:48.168260  698628 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:48.168280  698628 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:53:48.168288  698628 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 11:53:48.168385  698628 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-482771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-482771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:53:48.168462  698628 ssh_runner.go:195] Run: crio config
	I1101 11:53:48.243310  698628 cni.go:84] Creating CNI manager for ""
	I1101 11:53:48.243378  698628 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:53:48.243415  698628 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:53:48.243476  698628 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-482771 NodeName:pause-482771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:53:48.243665  698628 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-482771"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:53:48.243761  698628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:53:48.262233  698628 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:53:48.262326  698628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:53:48.274479  698628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 11:53:48.291862  698628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:53:48.307112  698628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
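
The kubeadm, kubelet, and kube-proxy configuration printed above is rendered from the cluster config and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2209-byte scp just above). As a sketch of how such a manifest can be produced from a handful of parameters, here is a small Go text/template example that renders only the KubeletConfiguration fields shown earlier; the struct and template are assumptions for illustration, not minikube's actual generator.

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeletParams holds the few values that vary between clusters in the
// KubeletConfiguration block shown in the log.
type kubeletParams struct {
	ClientCAFile    string
	CgroupDriver    string
	RuntimeEndpoint string
	ClusterDomain   string
	StaticPodPath   string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: {{.ClientCAFile}}
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.RuntimeEndpoint}}
clusterDomain: "{{.ClusterDomain}}"
staticPodPath: {{.StaticPodPath}}
`

func main() {
	p := kubeletParams{
		ClientCAFile:    "/var/lib/minikube/certs/ca.crt",
		CgroupDriver:    "cgroupfs",
		RuntimeEndpoint: "unix:///var/run/crio/crio.sock",
		ClusterDomain:   "cluster.local",
		StaticPodPath:   "/etc/kubernetes/manifests",
	}
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
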
	I1101 11:53:48.322802  698628 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:53:48.331472  698628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:48.594328  698628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:48.623005  698628 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771 for IP: 192.168.76.2
	I1101 11:53:48.623040  698628 certs.go:195] generating shared ca certs ...
	I1101 11:53:48.623057  698628 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:48.623191  698628 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:53:48.623248  698628 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:53:48.623260  698628 certs.go:257] generating profile certs ...
	I1101 11:53:48.623343  698628 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.key
	I1101 11:53:48.623408  698628 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/apiserver.key.cb01ebdb
	I1101 11:53:48.623459  698628 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/proxy-client.key
	I1101 11:53:48.623573  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:53:48.623606  698628 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:53:48.623617  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:53:48.623640  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:53:48.623671  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:53:48.623696  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:53:48.623743  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:48.624320  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:53:48.653380  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:53:48.683249  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:53:48.713866  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:53:48.746297  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 11:53:48.821274  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:53:48.907339  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:53:49.052831  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:53:49.174152  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:53:49.225049  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:53:49.295585  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:53:49.368949  698628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:53:49.423496  698628 ssh_runner.go:195] Run: openssl version
	I1101 11:53:49.456112  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:53:49.490253  698628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:53:49.497784  698628 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:53:49.497878  698628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:53:49.653937  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:53:49.692763  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:53:49.712554  698628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:49.736882  698628 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:49.736963  698628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:49.872962  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:53:49.908076  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:53:49.932141  698628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:53:49.942959  698628 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:53:49.943041  698628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:53:50.038938  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
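
Each certificate above is installed into the trust store by asking openssl for its subject hash and then linking /etc/ssl/certs/<hash>.0 at the PEM file, which is how OpenSSL looks CA certificates up at verification time. A hedged Go sketch of those two steps (it assumes an openssl binary and should be pointed at throwaway paths, not the live trust store):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) != 3 {
		log.Fatalf("usage: %s <cert.pem> <certs-dir>", os.Args[0])
	}
	certPath, certsDir := os.Args[1], os.Args[2]

	// Step 1: ask openssl for the subject hash, exactly as in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatalf("openssl x509 -hash: %v", err)
	}
	hash := strings.TrimSpace(string(out))

	// Step 2: link <hash>.0 at the certificate so OpenSSL can find it by subject.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatalf("symlink: %v", err)
	}
	fmt.Printf("linked %s -> %s\n", link, certPath)
}
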
	I1101 11:53:50.058060  698628 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:53:50.070097  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:53:50.198985  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:53:50.326877  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:53:50.436554  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:53:50.573489  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:53:50.684229  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
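
The six openssl runs above use -checkend 86400 to confirm that none of the control-plane certificates expires within the next 24 hours. The same check can be done in-process with the standard library; the certificate paths are passed on the command line purely for the example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	deadline := time.Now().Add(24 * time.Hour) // same window as -checkend 86400
	for _, path := range os.Args[1:] {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatalf("%s: %v", path, err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatalf("%s: %v", path, err)
		}
		if cert.NotAfter.Before(deadline) {
			fmt.Printf("%s expires %s: renewal needed\n", path, cert.NotAfter)
		} else {
			fmt.Printf("%s valid until %s\n", path, cert.NotAfter)
		}
	}
}
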
	I1101 11:53:50.808718  698628 kubeadm.go:401] StartCluster: {Name:pause-482771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-482771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:50.808829  698628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:53:50.808905  698628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:53:50.919552  698628 cri.go:89] found id: "a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b"
	I1101 11:53:50.919575  698628 cri.go:89] found id: "d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9"
	I1101 11:53:50.919580  698628 cri.go:89] found id: "aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0"
	I1101 11:53:50.919584  698628 cri.go:89] found id: "caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a"
	I1101 11:53:50.919587  698628 cri.go:89] found id: "e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835"
	I1101 11:53:50.919591  698628 cri.go:89] found id: "d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a"
	I1101 11:53:50.919603  698628 cri.go:89] found id: "0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7"
	I1101 11:53:50.919606  698628 cri.go:89] found id: "9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee"
	I1101 11:53:50.919609  698628 cri.go:89] found id: "16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512"
	I1101 11:53:50.919616  698628 cri.go:89] found id: "dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27"
	I1101 11:53:50.919619  698628 cri.go:89] found id: "5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a"
	I1101 11:53:50.919622  698628 cri.go:89] found id: "91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	I1101 11:53:50.919625  698628 cri.go:89] found id: "3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25"
	I1101 11:53:50.919628  698628 cri.go:89] found id: "727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d"
	I1101 11:53:50.919631  698628 cri.go:89] found id: ""
	I1101 11:53:50.919681  698628 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 11:53:50.967790  698628 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:53:50Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:53:50.967863  698628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:53:50.987304  698628 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:53:50.987327  698628 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:53:50.987397  698628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:53:51.004707  698628 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:53:51.005471  698628 kubeconfig.go:125] found "pause-482771" server: "https://192.168.76.2:8443"
	I1101 11:53:51.006528  698628 kapi.go:59] client config for pause-482771: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:51.007067  698628 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:53:51.007079  698628 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:53:51.007085  698628 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:53:51.007090  698628 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:53:51.007095  698628 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
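
The rest.Config dump above is the API client minikube builds from the profile's client certificate and the cluster CA before verifying components. A comparable client can be assembled from a kubeconfig using client-go; this sketch assumes the k8s.io/client-go module is available and only asks the API server for its version.

package main

import (
	"fmt"
	"log"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <kubeconfig>", os.Args[0])
	}

	// Build a *rest.Config (the same type dumped in the log) from a kubeconfig file.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1])
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A cheap liveness probe against the API server.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to Kubernetes", v.GitVersion)
}
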
	I1101 11:53:51.008594  698628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:53:51.029184  698628 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 11:53:51.029290  698628 kubeadm.go:602] duration metric: took 41.952038ms to restartPrimaryControlPlane
	I1101 11:53:51.029314  698628 kubeadm.go:403] duration metric: took 220.605738ms to StartCluster
	I1101 11:53:51.029343  698628 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:51.029446  698628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:53:51.030543  698628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:51.030849  698628 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:53:51.031483  698628 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:53:51.031614  698628 config.go:182] Loaded profile config "pause-482771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:51.035156  698628 out.go:179] * Enabled addons: 
	I1101 11:53:51.035276  698628 out.go:179] * Verifying Kubernetes components...
	I1101 11:53:47.605838  699286 out.go:252] * Updating the running docker "kubernetes-upgrade-396779" container ...
	I1101 11:53:47.605872  699286 machine.go:94] provisionDockerMachine start ...
	I1101 11:53:47.605962  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:47.632876  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:47.633198  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:47.633207  699286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:53:47.805206  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-396779
	
	I1101 11:53:47.805277  699286 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-396779"
	I1101 11:53:47.805363  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:47.825536  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:47.825889  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:47.825906  699286 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-396779 && echo "kubernetes-upgrade-396779" | sudo tee /etc/hostname
	I1101 11:53:47.996775  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-396779
	
	I1101 11:53:47.996858  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:48.023107  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:48.026532  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:48.026574  699286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-396779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-396779/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-396779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:53:48.202095  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
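
The SSH command above pins the machine's hostname to 127.0.1.1 in /etc/hosts, rewriting an existing 127.0.1.1 line or appending one only when no matching entry is present. Below is a stdlib-only Go sketch of the same idempotent edit, written to run against a copy of the hosts file; the path and hostname arguments are example values.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry makes sure `127.0.1.1 <hostname>` is present, replacing an
// existing 127.0.1.1 line if there is one, mirroring the shell in the log.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	want := "127.0.1.1 " + hostname
	lines := strings.Split(string(data), "\n")
	for i, l := range lines {
		fields := strings.Fields(l)
		if len(fields) >= 2 && fields[0] == "127.0.1.1" {
			if fields[1] == hostname {
				return nil // already correct
			}
			lines[i] = want // rewrite the existing 127.0.1.1 entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	// No 127.0.1.1 entry yet: append one, keeping the file newline-terminated.
	content := string(data)
	if content != "" && !strings.HasSuffix(content, "\n") {
		content += "\n"
	}
	content += want + "\n"
	return os.WriteFile(path, []byte(content), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.copy", "kubernetes-upgrade-396779"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hosts entry ensured")
}
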
	I1101 11:53:48.202124  699286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:53:48.202151  699286 ubuntu.go:190] setting up certificates
	I1101 11:53:48.202166  699286 provision.go:84] configureAuth start
	I1101 11:53:48.202230  699286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-396779
	I1101 11:53:48.238215  699286 provision.go:143] copyHostCerts
	I1101 11:53:48.238303  699286 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:53:48.238325  699286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:53:48.238429  699286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:53:48.238572  699286 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:53:48.238586  699286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:53:48.238635  699286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:53:48.238767  699286 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:53:48.238782  699286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:53:48.238820  699286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:53:48.238885  699286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-396779 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-396779 localhost minikube]
	I1101 11:53:48.451571  699286 provision.go:177] copyRemoteCerts
	I1101 11:53:48.451709  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:53:48.451808  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:48.492077  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:48.667400  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:53:48.722448  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 11:53:48.761112  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 11:53:48.800081  699286 provision.go:87] duration metric: took 597.889442ms to configureAuth
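configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.85.2, kubernetes-upgrade-396779, localhost, minikube) and copies it to /etc/docker on the node. A minimal sketch, using the paths from the log, of how the SANs and the cert/key pairing could be double-checked with openssl:

	# Show the SANs baked into the freshly generated server certificate
	openssl x509 -in /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# The two public-key digests below should match if the cert and key belong together
	openssl x509 -in /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem     -noout -pubkey | sha256sum
	openssl pkey -in /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem -pubout        | sha256sum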
	I1101 11:53:48.800111  699286 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:53:48.800336  699286 config.go:182] Loaded profile config "kubernetes-upgrade-396779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:48.800477  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:48.827614  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:48.827915  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:48.827929  699286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:53:49.841117  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:53:49.841134  699286 machine.go:97] duration metric: took 2.23525355s to provisionDockerMachine
	I1101 11:53:49.841145  699286 start.go:293] postStartSetup for "kubernetes-upgrade-396779" (driver="docker")
	I1101 11:53:49.841155  699286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:53:49.841236  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:53:49.841280  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:49.871830  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.003067  699286 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:53:50.014576  699286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:53:50.014605  699286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:53:50.014618  699286 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:53:50.014702  699286 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:53:50.014786  699286 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:53:50.014955  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:53:50.032583  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:50.071825  699286 start.go:296] duration metric: took 230.664665ms for postStartSetup
	I1101 11:53:50.071982  699286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:53:50.072060  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:50.104351  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.227557  699286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:53:50.234575  699286 fix.go:56] duration metric: took 2.662826277s for fixHost
	I1101 11:53:50.234597  699286 start.go:83] releasing machines lock for "kubernetes-upgrade-396779", held for 2.662881925s
	I1101 11:53:50.234669  699286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-396779
	I1101 11:53:50.260587  699286 ssh_runner.go:195] Run: cat /version.json
	I1101 11:53:50.260638  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:50.260902  699286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:53:50.260950  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:50.296930  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.297954  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.708886  699286 ssh_runner.go:195] Run: systemctl --version
	I1101 11:53:50.752697  699286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:53:50.938124  699286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:53:50.948667  699286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:53:50.948748  699286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:53:50.974122  699286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:53:50.974143  699286 start.go:496] detecting cgroup driver to use...
	I1101 11:53:50.974175  699286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:53:50.974222  699286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:53:51.001721  699286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:53:51.043854  699286 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:53:51.043965  699286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:53:51.087194  699286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:53:51.120272  699286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:53:51.487393  699286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:53:51.911030  699286 docker.go:234] disabling docker service ...
	I1101 11:53:51.911152  699286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:53:51.957240  699286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:53:51.986244  699286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:53:51.038168  698628 addons.go:515] duration metric: took 6.684894ms for enable addons: enabled=[]
	I1101 11:53:51.038309  698628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:51.493310  698628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:51.529795  698628 node_ready.go:35] waiting up to 6m0s for node "pause-482771" to be "Ready" ...
	I1101 11:53:52.375169  699286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:53:52.766091  699286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:53:52.811924  699286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:53:52.884129  699286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:53:52.884244  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:52.914127  699286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:53:52.914276  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:52.942287  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:52.971705  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.002408  699286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:53:53.037530  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.080268  699286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.118101  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.140806  699286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:53:53.162118  699286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:53:53.179050  699286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:53.535056  699286 ssh_runner.go:195] Run: sudo systemctl restart crio
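The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) before CRI-O is restarted. A quick sketch of how to confirm the drop-in actually carries those values on the node:

	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio   # should print "active" once the restart has completed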
	I1101 11:53:53.864329  699286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:53:53.864447  699286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:53:53.868832  699286 start.go:564] Will wait 60s for crictl version
	I1101 11:53:53.868967  699286 ssh_runner.go:195] Run: which crictl
	I1101 11:53:53.872954  699286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:53:53.908858  699286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:53:53.909014  699286 ssh_runner.go:195] Run: crio --version
	I1101 11:53:53.947639  699286 ssh_runner.go:195] Run: crio --version
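With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines earlier), the runtime can also be probed directly; a sketch of the same checks using standard crictl subcommands:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | grep -i runtimeready   # RuntimeReady/NetworkReady conditions from the CRI status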
	I1101 11:53:54.003570  699286 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:53:54.007384  699286 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-396779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:53:54.030476  699286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 11:53:54.034611  699286 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:53:54.034719  699286 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:53:54.034776  699286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:54.104951  699286 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:54.104970  699286 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:53:54.105026  699286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:54.161307  699286 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:54.161331  699286 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:53:54.161340  699286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 11:53:54.161449  699286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-396779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
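The kubelet unit above is installed as /lib/systemd/system/kubelet.service with the ExecStart override in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (both copied over a few lines below). A sketch of how the effective unit could be reviewed on the node:

	systemctl cat kubelet                 # unit file plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the final command line systemd will execute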
	I1101 11:53:54.161543  699286 ssh_runner.go:195] Run: crio config
	I1101 11:53:54.298824  699286 cni.go:84] Creating CNI manager for ""
	I1101 11:53:54.298847  699286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:53:54.298896  699286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:53:54.298927  699286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-396779 NodeName:kubernetes-upgrade-396779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:53:54.299108  699286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-396779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:53:54.299212  699286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:53:54.319647  699286 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:53:54.319736  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:53:54.334217  699286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1101 11:53:54.349630  699286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:53:54.363950  699286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
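The kubeadm config shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new. On recent kubeadm releases it could be sanity-checked before the restart goes any further; a hedged sketch, since the validate subcommand is not present in older versions:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new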
	I1101 11:53:54.396853  699286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:53:54.408644  699286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:54.683473  699286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:54.707092  699286 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779 for IP: 192.168.85.2
	I1101 11:53:54.707114  699286 certs.go:195] generating shared ca certs ...
	I1101 11:53:54.707130  699286 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:54.707341  699286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:53:54.707412  699286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:53:54.707427  699286 certs.go:257] generating profile certs ...
	I1101 11:53:54.707547  699286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.key
	I1101 11:53:54.707613  699286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/apiserver.key.890ff25d
	I1101 11:53:54.707675  699286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/proxy-client.key
	I1101 11:53:54.707835  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:53:54.707893  699286 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:53:54.707909  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:53:54.707950  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:53:54.707998  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:53:54.708035  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:53:54.708099  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:54.708737  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:53:54.752345  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:53:54.797446  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:53:54.827500  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:53:54.864576  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 11:53:54.890940  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:53:54.921418  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:53:54.954891  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:53:54.987464  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:53:55.026445  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:53:55.058531  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:53:55.091663  699286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:53:55.117687  699286 ssh_runner.go:195] Run: openssl version
	I1101 11:53:55.126330  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:53:55.139053  699286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:55.144023  699286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:55.144124  699286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:55.207788  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:53:55.216510  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:53:55.231108  699286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:53:55.237212  699286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:53:55.237309  699286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:53:55.292423  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:53:55.301296  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:53:55.310807  699286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:53:55.314926  699286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:53:55.315019  699286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:53:55.360384  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:53:55.369480  699286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:53:55.373485  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:53:55.418561  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:53:55.460153  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:53:55.516180  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:53:55.562646  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:53:55.608005  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
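The steps above install CA symlinks under /etc/ssl/certs using OpenSSL's subject-hash naming (hence b5213941.0 for minikubeCA.pem) and then use -checkend to verify that each control-plane certificate is valid for at least another 24 hours. Both idioms in a short sketch:

	# The symlink name is the certificate's subject hash plus ".0"
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem        # prints b5213941
	# Exit status 0 means the cert does not expire within the next 86400 seconds
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h"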
	I1101 11:53:55.652402  699286 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:55.652480  699286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:53:55.652579  699286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:53:55.693104  699286 cri.go:89] found id: "ba303fe218d2dfe039337cc65b8665e6208fcb97a0bfa6bf2fe8fb6efdaba7f0"
	I1101 11:53:55.693127  699286 cri.go:89] found id: "0dad010db6c2f9d17e6850c7aea098c9d98ddc616227a8bb6390ae9e6b2ccac0"
	I1101 11:53:55.693131  699286 cri.go:89] found id: "da3db3ef3f554bbb32ae8828dbacc4cf249c61034dc04a4d1738b5c3225e9dff"
	I1101 11:53:55.693136  699286 cri.go:89] found id: "8ff009219f7e8d56a017921144b72ef1c95e24ea074786a4942d2a0354251638"
	I1101 11:53:55.693139  699286 cri.go:89] found id: ""
	I1101 11:53:55.693218  699286 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 11:53:55.704566  699286 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:53:55Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:53:55.704681  699286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:53:55.712944  699286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:53:55.712964  699286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:53:55.713043  699286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:53:55.722293  699286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:53:55.723007  699286 kubeconfig.go:125] found "kubernetes-upgrade-396779" server: "https://192.168.85.2:8443"
	I1101 11:53:55.739896  699286 kapi.go:59] client config for kubernetes-upgrade-396779: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:55.740427  699286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:53:55.740441  699286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:53:55.740446  699286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:53:55.740451  699286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:53:55.740455  699286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:53:55.740739  699286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:53:55.752954  699286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 11:53:55.752985  699286 kubeadm.go:602] duration metric: took 40.015406ms to restartPrimaryControlPlane
	I1101 11:53:55.752993  699286 kubeadm.go:403] duration metric: took 100.600349ms to StartCluster
	I1101 11:53:55.753008  699286 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:55.753068  699286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:53:55.754098  699286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:55.754334  699286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:53:55.754563  699286 config.go:182] Loaded profile config "kubernetes-upgrade-396779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:55.754627  699286 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:53:55.754705  699286 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-396779"
	I1101 11:53:55.754724  699286 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-396779"
	W1101 11:53:55.754737  699286 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:53:55.754759  699286 host.go:66] Checking if "kubernetes-upgrade-396779" exists ...
	I1101 11:53:55.755249  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:55.755717  699286 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-396779"
	I1101 11:53:55.755741  699286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-396779"
	I1101 11:53:55.756065  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:55.763743  699286 out.go:179] * Verifying Kubernetes components...
	I1101 11:53:55.769805  699286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:55.798235  699286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:53:55.799794  699286 kapi.go:59] client config for kubernetes-upgrade-396779: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:55.800092  699286 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-396779"
	W1101 11:53:55.800104  699286 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:53:55.800128  699286 host.go:66] Checking if "kubernetes-upgrade-396779" exists ...
	I1101 11:53:55.800529  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:55.802934  699286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:53:55.802960  699286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:53:55.803024  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:55.837963  699286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:55.837982  699286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:53:55.838049  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:55.845965  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:55.875313  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:56.051530  699286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:56.052794  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:53:56.075212  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:56.130552  699286 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:53:56.130698  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:56.270763  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.270815  699286 retry.go:31] will retry after 254.045518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 11:53:56.270872  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.270884  699286 retry.go:31] will retry after 140.865867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.412272  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:56.525856  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:53:56.535204  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.535236  699286 retry.go:31] will retry after 349.443428ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.630875  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:56.634920  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.635001  699286 retry.go:31] will retry after 488.85168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.884927  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:53:56.974750  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.974792  699286 retry.go:31] will retry after 451.681607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.124078  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:53:57.131533  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:57.205942  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.206057  699286 retry.go:31] will retry after 431.089549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
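The repeated "connection refused" failures above are expected while the restarted kube-apiserver is still coming up on localhost:8443; minikube simply backs off and retries. The same idea expressed as a shell wait loop (endpoint, kubeconfig and binary paths taken from the log; a sketch, not what minikube itself runs):

	# Wait until the apiserver answers /readyz, then apply the addon manifest once
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl get --raw='/readyz' >/dev/null 2>&1; do
	    sleep 1
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml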
	I1101 11:53:56.372200  698628 node_ready.go:49] node "pause-482771" is "Ready"
	I1101 11:53:56.372226  698628 node_ready.go:38] duration metric: took 4.842401899s for node "pause-482771" to be "Ready" ...
	I1101 11:53:56.372240  698628 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:53:56.372300  698628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:53:56.392370  698628 api_server.go:72] duration metric: took 5.36145211s to wait for apiserver process to appear ...
	I1101 11:53:56.392392  698628 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:53:56.392412  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:56.411380  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:53:56.411413  698628 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
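The 500 above lists exactly which post-start hooks are still pending (rbac/bootstrap-roles, bootstrap-controller, and so on); the same per-check breakdown can be requested from the endpoint being polled. A sketch, reusing the address from the log (the anonymous curl may be rejected until RBAC bootstrap finishes):

	# Verbose, per-check health report straight from the apiserver (-k skips TLS verification)
	curl -sk 'https://192.168.76.2:8443/healthz?verbose'
	# Or, with cluster credentials:
	kubectl get --raw='/healthz?verbose'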
	I1101 11:53:56.893026  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:56.907751  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:53:56.907795  698628 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:53:57.393399  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:57.402908  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:53:57.402933  698628 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:53:57.892630  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:57.900842  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 11:53:57.902018  698628 api_server.go:141] control plane version: v1.34.1
	I1101 11:53:57.902046  698628 api_server.go:131] duration metric: took 1.509647262s to wait for apiserver health ...
	I1101 11:53:57.902055  698628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:53:57.905495  698628 system_pods.go:59] 7 kube-system pods found
	I1101 11:53:57.905538  698628 system_pods.go:61] "coredns-66bc5c9577-49sg2" [5b31d9e2-1052-4646-a62f-7adc7c2d045c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:53:57.905548  698628 system_pods.go:61] "etcd-pause-482771" [7c0221c3-61a4-484e-8cfb-08fa19deb2cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:53:57.905553  698628 system_pods.go:61] "kindnet-xscmv" [ddeffdc3-3ed3-40ea-8b90-931a1aee6317] Running
	I1101 11:53:57.905560  698628 system_pods.go:61] "kube-apiserver-pause-482771" [ee664fdb-2fd8-4f99-947c-2885a3f74227] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:53:57.905567  698628 system_pods.go:61] "kube-controller-manager-pause-482771" [be707719-1b56-46cf-827c-481f64c7da47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:53:57.905576  698628 system_pods.go:61] "kube-proxy-c22qb" [d0861096-6955-4968-aa01-324237dd0609] Running
	I1101 11:53:57.905583  698628 system_pods.go:61] "kube-scheduler-pause-482771" [b15c318f-8138-47a0-94d3-d8a0a6b7fad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:53:57.905597  698628 system_pods.go:74] duration metric: took 3.535684ms to wait for pod list to return data ...
	I1101 11:53:57.905608  698628 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:53:57.908128  698628 default_sa.go:45] found service account: "default"
	I1101 11:53:57.908159  698628 default_sa.go:55] duration metric: took 2.545214ms for default service account to be created ...
	I1101 11:53:57.908169  698628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:53:57.910995  698628 system_pods.go:86] 7 kube-system pods found
	I1101 11:53:57.911028  698628 system_pods.go:89] "coredns-66bc5c9577-49sg2" [5b31d9e2-1052-4646-a62f-7adc7c2d045c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:53:57.911038  698628 system_pods.go:89] "etcd-pause-482771" [7c0221c3-61a4-484e-8cfb-08fa19deb2cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:53:57.911085  698628 system_pods.go:89] "kindnet-xscmv" [ddeffdc3-3ed3-40ea-8b90-931a1aee6317] Running
	I1101 11:53:57.911093  698628 system_pods.go:89] "kube-apiserver-pause-482771" [ee664fdb-2fd8-4f99-947c-2885a3f74227] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:53:57.911101  698628 system_pods.go:89] "kube-controller-manager-pause-482771" [be707719-1b56-46cf-827c-481f64c7da47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:53:57.911111  698628 system_pods.go:89] "kube-proxy-c22qb" [d0861096-6955-4968-aa01-324237dd0609] Running
	I1101 11:53:57.911119  698628 system_pods.go:89] "kube-scheduler-pause-482771" [b15c318f-8138-47a0-94d3-d8a0a6b7fad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:53:57.911144  698628 system_pods.go:126] duration metric: took 2.966742ms to wait for k8s-apps to be running ...
	I1101 11:53:57.911161  698628 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:53:57.911231  698628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:53:57.924318  698628 system_svc.go:56] duration metric: took 13.147935ms WaitForService to wait for kubelet
	I1101 11:53:57.924346  698628 kubeadm.go:587] duration metric: took 6.893434502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:53:57.924376  698628 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:53:57.927529  698628 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:53:57.927564  698628 node_conditions.go:123] node cpu capacity is 2
	I1101 11:53:57.927576  698628 node_conditions.go:105] duration metric: took 3.19497ms to run NodePressure ...
	I1101 11:53:57.927589  698628 start.go:242] waiting for startup goroutines ...
	I1101 11:53:57.927596  698628 start.go:247] waiting for cluster config update ...
	I1101 11:53:57.927604  698628 start.go:256] writing updated cluster config ...
	I1101 11:53:57.927914  698628 ssh_runner.go:195] Run: rm -f paused
	I1101 11:53:57.931665  698628 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:53:57.932332  698628 kapi.go:59] client config for pause-482771: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:57.935431  698628 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49sg2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:53:57.426689  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:53:57.515206  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.515302  699286 retry.go:31] will retry after 561.90039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.631284  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:53:57.637793  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:53:57.701072  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.701104  699286 retry.go:31] will retry after 795.170356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.077668  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:58.131295  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:58.148817  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.148896  699286 retry.go:31] will retry after 978.703539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.497271  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:53:58.562095  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.562164  699286 retry.go:31] will retry after 1.672012903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.631484  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:53:59.128626  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:59.131110  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:59.196662  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:59.196688  699286 retry.go:31] will retry after 1.150881874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:59.630779  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:00.137137  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:00.236769  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:54:00.349772  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:54:00.360806  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:00.360915  699286 retry.go:31] will retry after 1.952791589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 11:54:00.473311  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:00.473427  699286 retry.go:31] will retry after 3.908754672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:00.631795  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:01.131270  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:01.631162  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:02.131687  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:59.941744  698628 pod_ready.go:104] pod "coredns-66bc5c9577-49sg2" is not "Ready", error: <nil>
	I1101 11:54:01.441148  698628 pod_ready.go:94] pod "coredns-66bc5c9577-49sg2" is "Ready"
	I1101 11:54:01.441180  698628 pod_ready.go:86] duration metric: took 3.5057231s for pod "coredns-66bc5c9577-49sg2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:01.443538  698628 pod_ready.go:83] waiting for pod "etcd-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.949755  698628 pod_ready.go:94] pod "etcd-pause-482771" is "Ready"
	I1101 11:54:02.949827  698628 pod_ready.go:86] duration metric: took 1.506257198s for pod "etcd-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.952276  698628 pod_ready.go:83] waiting for pod "kube-apiserver-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.956778  698628 pod_ready.go:94] pod "kube-apiserver-pause-482771" is "Ready"
	I1101 11:54:02.956805  698628 pod_ready.go:86] duration metric: took 4.503122ms for pod "kube-apiserver-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.959275  698628 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.314769  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:54:02.382919  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:02.382955  699286 retry.go:31] will retry after 3.654222907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:02.631296  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:03.131233  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:03.631073  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:04.131578  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:04.382832  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:54:04.512433  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:04.512474  699286 retry.go:31] will retry after 3.209864376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:04.630747  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:05.131134  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:05.630867  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:06.037825  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:54:06.097067  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:06.097102  699286 retry.go:31] will retry after 4.86929524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:06.131403  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:06.631086  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:07.131617  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:54:04.967039  698628 pod_ready.go:104] pod "kube-controller-manager-pause-482771" is not "Ready", error: <nil>
	I1101 11:54:06.965373  698628 pod_ready.go:94] pod "kube-controller-manager-pause-482771" is "Ready"
	I1101 11:54:06.965398  698628 pod_ready.go:86] duration metric: took 4.006050681s for pod "kube-controller-manager-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:06.968070  698628 pod_ready.go:83] waiting for pod "kube-proxy-c22qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:06.974381  698628 pod_ready.go:94] pod "kube-proxy-c22qb" is "Ready"
	I1101 11:54:06.974416  698628 pod_ready.go:86] duration metric: took 6.323462ms for pod "kube-proxy-c22qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:06.976755  698628 pod_ready.go:83] waiting for pod "kube-scheduler-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:07.238730  698628 pod_ready.go:94] pod "kube-scheduler-pause-482771" is "Ready"
	I1101 11:54:07.238757  698628 pod_ready.go:86] duration metric: took 261.976403ms for pod "kube-scheduler-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:07.238769  698628 pod_ready.go:40] duration metric: took 9.307071156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:54:07.297857  698628 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 11:54:07.300878  698628 out.go:179] * Done! kubectl is now configured to use "pause-482771" cluster and "default" namespace by default
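	
	Editor's note: the repeated healthz probes in the log above (HTTP 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then HTTP 200 once they finish) follow a plain poll-until-ready pattern. Below is a minimal, illustrative Go sketch of that pattern only; it is not minikube's actual implementation, and the endpoint URL, poll interval, overall timeout, and the use of InsecureSkipVerify are assumptions made for the sketch (the real client authenticates with the profile's client certificate and key).
	
	// healthzpoll.go — illustrative sketch, not minikube code.
	// Polls an apiserver-style /healthz endpoint until it returns 200 or the
	// context times out, mirroring the 500-then-200 sequence seen in the log.
	package main
	
	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(ctx context.Context, url string, interval time.Duration) error {
		client := &http.Client{
			// Assumption for the sketch: skip TLS verification instead of
			// loading the cluster's client cert/key and CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "healthz returned 200: ok"
				}
				// A 500 body lists each post-start hook as [+] ok or [-] failed.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		// Address and ~500ms interval are taken from the log above; both are assumptions.
		if err := waitForHealthz(ctx, "https://192.168.76.2:8443/healthz", 500*time.Millisecond); err != nil {
			fmt.Println("apiserver did not become healthy:", err)
		}
	}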
	
	
	==> CRI-O <==
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.250491643Z" level=info msg="Created container aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0: kube-system/coredns-66bc5c9577-49sg2/coredns" id=ebfe642d-0d14-4405-964e-540b80a91087 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.257870995Z" level=info msg="Created container caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a: kube-system/kube-proxy-c22qb/kube-proxy" id=77f91192-ca64-4cae-a890-db6205e11605 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.258436024Z" level=info msg="Started container" PID=2261 containerID=e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835 description=kube-system/kindnet-xscmv/kindnet-cni id=2fc40b6e-d168-4d6f-9adc-f04ee577a10a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b855ba137c91b8167e492b75549d185759d3c89bff7600b2b2ed8d11a93383a1
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.270304221Z" level=info msg="Starting container: aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0" id=9325fa2a-3653-4a37-9191-b66e3e89552b name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.271031197Z" level=info msg="Starting container: caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a" id=033769db-4453-4deb-abe7-d51fa3ba7c70 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.274439191Z" level=info msg="Created container d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9: kube-system/kube-controller-manager-pause-482771/kube-controller-manager" id=ba0fe261-8a81-41da-82fa-c5a540f24abd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.28687954Z" level=info msg="Started container" PID=2270 containerID=caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a description=kube-system/kube-proxy-c22qb/kube-proxy id=033769db-4453-4deb-abe7-d51fa3ba7c70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb48dcff20368cc29326549bafaf66b3dd57f0e9b114f34f4c032ed582df4658
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.294414037Z" level=info msg="Starting container: d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9" id=16a9ac3a-4955-4b56-af17-feacf0a9cbca name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.306127689Z" level=info msg="Started container" PID=2273 containerID=aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0 description=kube-system/coredns-66bc5c9577-49sg2/coredns id=9325fa2a-3653-4a37-9191-b66e3e89552b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c97c1949f2d243cf5e0375dc9328490f6bafbfa7f84fbff9eba24d9944ee59bb
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.316248418Z" level=info msg="Created container a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b: kube-system/etcd-pause-482771/etcd" id=9ccbbaef-0c73-497e-ae22-c4aed8c48190 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.322501823Z" level=info msg="Starting container: a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b" id=abb80776-5dc9-4ced-bc9d-1d5eb2c05103 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.333270716Z" level=info msg="Started container" PID=2284 containerID=d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9 description=kube-system/kube-controller-manager-pause-482771/kube-controller-manager id=16a9ac3a-4955-4b56-af17-feacf0a9cbca name=/runtime.v1.RuntimeService/StartContainer sandboxID=a317857f3407a3091a6247b4a5f8ecbf443ad5a0dbd803fa0b586a8cf39b884f
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.340946663Z" level=info msg="Started container" PID=2319 containerID=a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b description=kube-system/etcd-pause-482771/etcd id=abb80776-5dc9-4ced-bc9d-1d5eb2c05103 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b9e623bf705e4a9e2c5bd0635aa161ea602bf6d4edba4c72009d4c56462bcb2
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.772952559Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.776518841Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.776555765Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.776577878Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.780035104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.780074095Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.78009451Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.783343675Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.78337895Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.783401809Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.787455228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.787491708Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a7b6614083cf2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   4b9e623bf705e       etcd-pause-482771                      kube-system
	d239515eba866       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   a317857f3407a       kube-controller-manager-pause-482771   kube-system
	aa9b1fe68a325       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   c97c1949f2d24       coredns-66bc5c9577-49sg2               kube-system
	caefba313f65a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   eb48dcff20368       kube-proxy-c22qb                       kube-system
	e0cc97d39f883       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   b855ba137c91b       kindnet-xscmv                          kube-system
	d27b59c7134b6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   4ab6c741da7b9       kube-apiserver-pause-482771            kube-system
	0c423f58739ae       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   91fcc0aef761b       kube-scheduler-pause-482771            kube-system
	9f67935511fe8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   c97c1949f2d24       coredns-66bc5c9577-49sg2               kube-system
	16914a0c9df1d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   eb48dcff20368       kube-proxy-c22qb                       kube-system
	dc23516676917       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   b855ba137c91b       kindnet-xscmv                          kube-system
	5a32cc49f9c58       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   4ab6c741da7b9       kube-apiserver-pause-482771            kube-system
	91d6ed15f3167       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4b9e623bf705e       etcd-pause-482771                      kube-system
	3b9b4a780447f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   91fcc0aef761b       kube-scheduler-pause-482771            kube-system
	727279a73ea7c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   a317857f3407a       kube-controller-manager-pause-482771   kube-system
	
	
	==> coredns [9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56615 - 1102 "HINFO IN 221121602160979795.876064981112779147. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010305609s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51619 - 38278 "HINFO IN 4304995760194375860.86542241889092373. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021833338s
	
	
	==> describe nodes <==
	Name:               pause-482771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-482771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=pause-482771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_52_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-482771
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:53:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-482771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                436bd671-e9ad-45d7-a076-791b029e6c70
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-49sg2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-482771                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-xscmv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-482771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-482771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-c22qb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-482771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 76s   kube-proxy       
	  Normal   Starting                 14s   kube-proxy       
	  Normal   Starting                 83s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s   kubelet          Node pause-482771 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s   kubelet          Node pause-482771 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s   kubelet          Node pause-482771 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s   node-controller  Node pause-482771 event: Registered Node pause-482771 in Controller
	  Normal   NodeReady                36s   kubelet          Node pause-482771 status is now: NodeReady
	  Normal   RegisteredNode           11s   node-controller  Node pause-482771 event: Registered Node pause-482771 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:26] overlayfs: idmapped layers are currently not supported
	[  +2.957169] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:27] overlayfs: idmapped layers are currently not supported
	[ +46.322577] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:34] overlayfs: idmapped layers are currently not supported
	[ +35.784283] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856] <==
	{"level":"warn","ts":"2025-11-01T11:52:42.980351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.025017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.081635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.142556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.169892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.199409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.337751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34776","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T11:53:40.033759Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T11:53:40.033815Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-482771","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-01T11:53:40.057175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:53:40.220874Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:53:40.221020Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:53:40.221069Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-01T11:53:40.221215Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T11:53:40.221266Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221515Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221571Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:53:40.221606Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221709Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221747Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:53:40.221782Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:53:40.224475Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-01T11:53:40.224596Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:53:40.224648Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T11:53:40.224700Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-482771","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b] <==
	{"level":"warn","ts":"2025-11-01T11:53:54.242399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.291776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.348162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.389099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.441805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.478584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.538418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.576502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.592022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.607342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.631275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.666179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.718576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.719029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.744492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.766050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.781137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.806742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.822571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.844974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.867388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.908286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.937570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.969574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:55.114978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:54:11 up  3:36,  0 user,  load average: 4.49, 2.85, 2.30
	Linux pause-482771 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27] <==
	I1101 11:52:53.820195       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:52:53.820615       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 11:52:53.820829       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:52:53.820879       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:52:53.820919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:52:54.022632       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:52:54.022708       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:52:54.022745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:52:54.023875       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 11:53:24.022799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 11:53:24.024037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 11:53:24.024041       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 11:53:24.024212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 11:53:25.123637       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:53:25.123738       1 metrics.go:72] Registering metrics
	I1101 11:53:25.123866       1 controller.go:711] "Syncing nftables rules"
	I1101 11:53:34.026161       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:53:34.026273       1 main.go:301] handling current node
	
	
	==> kindnet [e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835] <==
	I1101 11:53:49.421424       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:53:49.421819       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 11:53:49.421969       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:53:49.421992       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:53:49.422009       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:53:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:53:49.772087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:53:49.772171       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:53:49.772207       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:53:49.777482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 11:53:56.476597       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:53:56.476731       1 metrics.go:72] Registering metrics
	I1101 11:53:56.476829       1 controller.go:711] "Syncing nftables rules"
	I1101 11:53:59.772006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:53:59.772069       1 main.go:301] handling current node
	I1101 11:54:09.772767       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:54:09.772835       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a] <==
	W1101 11:53:40.075738       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075780       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075741       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075631       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075514       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075863       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075895       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075929       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075960       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075992       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076025       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076053       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076081       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076329       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076359       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077220       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077289       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077323       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077354       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077385       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077416       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077445       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077479       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.074804       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a] <==
	I1101 11:53:56.385219       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 11:53:56.385287       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:53:56.391007       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 11:53:56.391071       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 11:53:56.392654       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 11:53:56.392688       1 aggregator.go:171] initial CRD sync complete...
	I1101 11:53:56.392696       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 11:53:56.392702       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:53:56.392706       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:53:56.392841       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 11:53:56.435798       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:53:56.450725       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 11:53:56.464647       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 11:53:56.466542       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 11:53:56.466609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 11:53:56.474035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 11:53:56.492410       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 11:53:56.492499       1 policy_source.go:240] refreshing policies
	I1101 11:53:56.513836       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:53:57.164786       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:53:58.399580       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 11:53:59.868402       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:53:59.969893       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 11:54:00.096713       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:54:00.189159       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d] <==
	I1101 11:52:52.277786       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 11:52:52.284852       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 11:52:52.285741       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 11:52:52.287059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 11:52:52.298915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:52:52.298937       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:52:52.298944       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:52:52.309954       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:52:52.319866       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 11:52:52.319918       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 11:52:52.320319       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 11:52:52.321499       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 11:52:52.321744       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 11:52:52.321901       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 11:52:52.321941       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 11:52:52.322024       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 11:52:52.322095       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 11:52:52.323268       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 11:52:52.323361       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 11:52:52.323557       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 11:52:52.324797       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 11:52:52.324828       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 11:52:52.325949       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 11:52:52.332402       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 11:53:37.281950       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9] <==
	I1101 11:53:59.670435       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 11:53:59.671999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 11:53:59.672247       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 11:53:59.673805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 11:53:59.673855       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 11:53:59.673872       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 11:53:59.675336       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 11:53:59.677810       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 11:53:59.679374       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 11:53:59.680494       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 11:53:59.682167       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 11:53:59.683588       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 11:53:59.686059       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 11:53:59.686106       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 11:53:59.691734       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 11:53:59.693922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 11:53:59.711464       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 11:53:59.712202       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 11:53:59.712249       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 11:53:59.712320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:53:59.712384       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:53:59.712415       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:53:59.712813       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 11:53:59.712845       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 11:53:59.779567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512] <==
	I1101 11:52:53.767970       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:52:53.859552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:52:53.960180       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:52:53.960251       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 11:52:53.960323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:52:53.988378       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:52:53.988436       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:52:53.992468       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:52:53.992765       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:52:53.992787       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:52:53.994698       1 config.go:200] "Starting service config controller"
	I1101 11:52:53.994782       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:52:53.994826       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:52:53.994854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:52:53.995049       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:52:53.995088       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:52:53.997496       1 config.go:309] "Starting node config controller"
	I1101 11:52:53.997568       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:52:53.997604       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:52:54.095541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:52:54.095551       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 11:52:54.095493       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a] <==
	I1101 11:53:55.328316       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:53:55.919517       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:53:56.505901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:53:56.505974       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 11:53:56.506046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:53:56.538730       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:53:56.538785       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:53:56.553867       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:53:56.554277       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:53:56.554335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:53:56.555723       1 config.go:200] "Starting service config controller"
	I1101 11:53:56.555803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:53:56.555851       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:53:56.560325       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:53:56.556060       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:53:56.560364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:53:56.556712       1 config.go:309] "Starting node config controller"
	I1101 11:53:56.560374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:53:56.560379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:53:56.657672       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:53:56.660499       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:53:56.660541       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7] <==
	I1101 11:53:52.795394       1 serving.go:386] Generated self-signed cert in-memory
	W1101 11:53:56.343595       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:53:56.343638       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:53:56.343649       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:53:56.343657       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:53:56.407877       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:53:56.407921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:53:56.420275       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:53:56.420363       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:53:56.420368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:56.420441       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:56.522063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25] <==
	E1101 11:52:45.720785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 11:52:45.724959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 11:52:45.726938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 11:52:45.726991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 11:52:45.727032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 11:52:45.727071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 11:52:45.727115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 11:52:45.727197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 11:52:45.727323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:52:45.727413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 11:52:45.727449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 11:52:45.727537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 11:52:45.727578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 11:52:45.727650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 11:52:45.727701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 11:52:45.727975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 11:52:45.728070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 11:52:45.729092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1101 11:52:46.896190       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:40.026243       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 11:53:40.026372       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 11:53:40.026390       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 11:53:40.026411       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:40.026636       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 11:53:40.026653       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.990152    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-482771\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ce7ce70e8d7b65ff886873cce8964842" pod="kube-system/kube-apiserver-pause-482771"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.990381    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-482771\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="81101c32cb35feaeaf80f1278d075e53" pod="kube-system/kube-controller-manager-pause-482771"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.990726    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xscmv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ddeffdc3-3ed3-40ea-8b90-931a1aee6317" pod="kube-system/kindnet-xscmv"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.991227    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c22qb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d0861096-6955-4968-aa01-324237dd0609" pod="kube-system/kube-proxy-c22qb"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: I1101 11:53:48.991478    1313 scope.go:117] "RemoveContainer" containerID="91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.992523    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-49sg2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5b31d9e2-1052-4646-a62f-7adc7c2d045c" pod="kube-system/coredns-66bc5c9577-49sg2"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.097938    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.098356    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.098656    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.099196    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.099611    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: I1101 11:53:49.099741    1313 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.100096    1313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="200ms"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.275069    1313 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-482771\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.276055    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="ce7ce70e8d7b65ff886873cce8964842" pod="kube-system/kube-apiserver-pause-482771"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.277800    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="81101c32cb35feaeaf80f1278d075e53" pod="kube-system/kube-controller-manager-pause-482771"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.334770    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xscmv\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="ddeffdc3-3ed3-40ea-8b90-931a1aee6317" pod="kube-system/kindnet-xscmv"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.356794    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-c22qb\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="d0861096-6955-4968-aa01-324237dd0609" pod="kube-system/kube-proxy-c22qb"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.370780    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-49sg2\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="5b31d9e2-1052-4646-a62f-7adc7c2d045c" pod="kube-system/coredns-66bc5c9577-49sg2"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.379645    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="fad7ceeb4086c6bce1e6a1f1f2d84a76" pod="kube-system/etcd-pause-482771"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.399420    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="05f2cfa65e3ab911cf76bf0e3596338d" pod="kube-system/kube-scheduler-pause-482771"
	Nov 01 11:53:57 pause-482771 kubelet[1313]: W1101 11:53:57.732638    1313 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 11:54:07 pause-482771 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 11:54:07 pause-482771 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 11:54:07 pause-482771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-482771 -n pause-482771
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-482771 -n pause-482771: exit status 2 (475.415306ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-482771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
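For reference, the two post-mortem probes recorded above can be replayed by hand against the same profile. The sketch below is a hypothetical helper, not part of the minikube test suite: it assumes the out/minikube-linux-arm64 binary and kubectl are on PATH and that the pause-482771 profile still exists, and it simply shells out to the exact commands shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output plus any error,
// mirroring the "(dbg) Run:" lines in the report above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nexit: %v\n%s\n", name, args, err, out)
}

func main() {
	profile := "pause-482771" // profile name taken from this report

	// Same status probe as helpers_test.go:262 (a non-zero exit here may be ok).
	run("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile)

	// Same pod listing as helpers_test.go:269: any pod not in phase Running.
	run("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
}
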
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-482771
helpers_test.go:243: (dbg) docker inspect pause-482771:

-- stdout --
	[
	    {
	        "Id": "1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3",
	        "Created": "2025-11-01T11:52:19.372006523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 694519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:52:19.443691134Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/hosts",
	        "LogPath": "/var/lib/docker/containers/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3/1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3-json.log",
	        "Name": "/pause-482771",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-482771:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-482771",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c8f489829803ffdf3489a7a8e1949b55dc7792dfee468303e98b07c314728d3",
	                "LowerDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e690c754a101ec61f905be9d9a4619a9db3e01785983caa6adf21d793a1c0013/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-482771",
	                "Source": "/var/lib/docker/volumes/pause-482771/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-482771",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-482771",
	                "name.minikube.sigs.k8s.io": "pause-482771",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "40ec72c81e83ed1a5ef664c2ab620e92d491bce95a9c4b08daf050437e5c058a",
	            "SandboxKey": "/var/run/docker/netns/40ec72c81e83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33751"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33754"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33752"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33753"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-482771": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:25:45:28:70:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9a2dd9b03a4f7d446a2069a1390dd74ecf9fb19546f75f063fcd2c8ff7169a8",
	                    "EndpointID": "f30aec4d304279a70cabe24781c4f7a841a6429d0ddbc502250dd164b1b3c061",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-482771",
	                        "1c8f48982980"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
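The full docker inspect JSON above is what the post-mortem collects verbatim; when only the container state and the published ports are of interest, the same fields can be read directly with Docker's Go-template output. A minimal sketch, assuming the same profile/container name pause-482771:

  # overall container state ("running", "paused", ...)
  docker inspect -f '{{.State.Status}}' pause-482771
  # all published ports as JSON
  docker inspect -f '{{json .NetworkSettings.Ports}}' pause-482771
  # a single host port, using the same template shape minikube itself runs later in this log
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-482771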
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-482771 -n pause-482771
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-482771 -n pause-482771: exit status 2 (447.86479ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-482771 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-482771 logs -n 25: (2.007652386s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:47 UTC │ 01 Nov 25 11:49 UTC │
	│ start   │ -p missing-upgrade-598273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-598273    │ jenkins │ v1.37.0 │ 01 Nov 25 11:47 UTC │ 01 Nov 25 11:48 UTC │
	│ delete  │ -p missing-upgrade-598273                                                                                                                │ missing-upgrade-598273    │ jenkins │ v1.37.0 │ 01 Nov 25 11:48 UTC │ 01 Nov 25 11:48 UTC │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:48 UTC │ 01 Nov 25 11:49 UTC │
	│ stop    │ -p kubernetes-upgrade-396779                                                                                                             │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:49 UTC │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:53 UTC │
	│ delete  │ -p NoKubernetes-656070                                                                                                                   │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:49 UTC │
	│ start   │ -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │ 01 Nov 25 11:49 UTC │
	│ ssh     │ -p NoKubernetes-656070 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:49 UTC │                     │
	│ stop    │ -p NoKubernetes-656070                                                                                                                   │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:50 UTC │
	│ start   │ -p NoKubernetes-656070 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:50 UTC │
	│ ssh     │ -p NoKubernetes-656070 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │                     │
	│ delete  │ -p NoKubernetes-656070                                                                                                                   │ NoKubernetes-656070       │ jenkins │ v1.37.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:50 UTC │
	│ start   │ -p stopped-upgrade-043825 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-043825    │ jenkins │ v1.32.0 │ 01 Nov 25 11:50 UTC │ 01 Nov 25 11:51 UTC │
	│ stop    │ stopped-upgrade-043825 stop                                                                                                              │ stopped-upgrade-043825    │ jenkins │ v1.32.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ start   │ -p stopped-upgrade-043825 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-043825    │ jenkins │ v1.37.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ delete  │ -p stopped-upgrade-043825                                                                                                                │ stopped-upgrade-043825    │ jenkins │ v1.37.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ start   │ -p running-upgrade-496459 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-496459    │ jenkins │ v1.32.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:51 UTC │
	│ start   │ -p running-upgrade-496459 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-496459    │ jenkins │ v1.37.0 │ 01 Nov 25 11:51 UTC │ 01 Nov 25 11:52 UTC │
	│ delete  │ -p running-upgrade-496459                                                                                                                │ running-upgrade-496459    │ jenkins │ v1.37.0 │ 01 Nov 25 11:52 UTC │ 01 Nov 25 11:52 UTC │
	│ start   │ -p pause-482771 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-482771              │ jenkins │ v1.37.0 │ 01 Nov 25 11:52 UTC │ 01 Nov 25 11:53 UTC │
	│ start   │ -p pause-482771 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-482771              │ jenkins │ v1.37.0 │ 01 Nov 25 11:53 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-396779 │ jenkins │ v1.37.0 │ 01 Nov 25 11:53 UTC │                     │
	│ pause   │ -p pause-482771 --alsologtostderr -v=5                                                                                                   │ pause-482771              │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:53:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:53:47.224467  699286 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:53:47.224580  699286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:53:47.224591  699286 out.go:374] Setting ErrFile to fd 2...
	I1101 11:53:47.224595  699286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:53:47.224960  699286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:53:47.225404  699286 out.go:368] Setting JSON to false
	I1101 11:53:47.226707  699286 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12977,"bootTime":1761985051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:53:47.226815  699286 start.go:143] virtualization:  
	I1101 11:53:47.230249  699286 out.go:179] * [kubernetes-upgrade-396779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:53:47.234410  699286 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:53:47.234482  699286 notify.go:221] Checking for updates...
	I1101 11:53:47.240577  699286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:53:47.243554  699286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:53:47.247292  699286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:53:47.250254  699286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:53:47.253320  699286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:53:47.257041  699286 config.go:182] Loaded profile config "kubernetes-upgrade-396779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:47.257858  699286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:53:47.298533  699286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:53:47.298789  699286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:53:47.408924  699286 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 11:53:47.399520813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:53:47.409125  699286 docker.go:319] overlay module found
	I1101 11:53:47.412593  699286 out.go:179] * Using the docker driver based on existing profile
	I1101 11:53:47.415767  699286 start.go:309] selected driver: docker
	I1101 11:53:47.415827  699286 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:47.415965  699286 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:53:47.416697  699286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:53:47.529138  699286 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 11:53:47.520323858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:53:47.529474  699286 cni.go:84] Creating CNI manager for ""
	I1101 11:53:47.529530  699286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:53:47.529573  699286 start.go:353] cluster config:
	{Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:47.532946  699286 out.go:179] * Starting "kubernetes-upgrade-396779" primary control-plane node in "kubernetes-upgrade-396779" cluster
	I1101 11:53:47.537267  699286 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:53:47.540289  699286 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:53:47.543284  699286 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:53:47.543337  699286 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 11:53:47.543346  699286 cache.go:59] Caching tarball of preloaded images
	I1101 11:53:47.543423  699286 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:53:47.543431  699286 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:53:47.543559  699286 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/config.json ...
	I1101 11:53:47.543760  699286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:53:47.571601  699286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:53:47.571620  699286 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:53:47.571632  699286 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:53:47.571660  699286 start.go:360] acquireMachinesLock for kubernetes-upgrade-396779: {Name:mk9bd955707603a39df009911c13a21a1beee843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:53:47.571707  699286 start.go:364] duration metric: took 30.86µs to acquireMachinesLock for "kubernetes-upgrade-396779"
	I1101 11:53:47.571736  699286 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:53:47.571741  699286 fix.go:54] fixHost starting: 
	I1101 11:53:47.571998  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:47.602594  699286 fix.go:112] recreateIfNeeded on kubernetes-upgrade-396779: state=Running err=<nil>
	W1101 11:53:47.602623  699286 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:53:45.339019  698628 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:53:45.339106  698628 machine.go:97] duration metric: took 6.660382594s to provisionDockerMachine
	I1101 11:53:45.339135  698628 start.go:293] postStartSetup for "pause-482771" (driver="docker")
	I1101 11:53:45.339184  698628 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:53:45.339304  698628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:53:45.339396  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.387793  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:45.516074  698628 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:53:45.522192  698628 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:53:45.522227  698628 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:53:45.522242  698628 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:53:45.522314  698628 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:53:45.522405  698628 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:53:45.522529  698628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:53:45.539182  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:45.572497  698628 start.go:296] duration metric: took 233.310795ms for postStartSetup
	I1101 11:53:45.572598  698628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:53:45.572645  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.602904  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:45.707320  698628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:53:45.713261  698628 fix.go:56] duration metric: took 7.080360155s for fixHost
	I1101 11:53:45.713284  698628 start.go:83] releasing machines lock for "pause-482771", held for 7.080448369s
	I1101 11:53:45.713353  698628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-482771
	I1101 11:53:45.735611  698628 ssh_runner.go:195] Run: cat /version.json
	I1101 11:53:45.735672  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.735615  698628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:53:45.735749  698628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-482771
	I1101 11:53:45.785432  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:45.785420  698628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/pause-482771/id_rsa Username:docker}
	I1101 11:53:46.111281  698628 ssh_runner.go:195] Run: systemctl --version
	I1101 11:53:46.118269  698628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:53:46.204466  698628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:53:46.210347  698628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:53:46.210448  698628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:53:46.219174  698628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:53:46.219201  698628 start.go:496] detecting cgroup driver to use...
	I1101 11:53:46.219232  698628 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:53:46.219281  698628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:53:46.242390  698628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:53:46.266459  698628 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:53:46.266525  698628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:53:46.287354  698628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:53:46.302167  698628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:53:46.541126  698628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:53:46.748981  698628 docker.go:234] disabling docker service ...
	I1101 11:53:46.749049  698628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:53:46.766120  698628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:53:46.781597  698628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:53:46.999264  698628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:53:47.189261  698628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:53:47.204271  698628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:53:47.231290  698628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:53:47.231356  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.244176  698628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:53:47.244243  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.254840  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.265916  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.275632  698628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:53:47.284715  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.296741  698628 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.306309  698628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:47.319554  698628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:53:47.341010  698628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:53:47.358705  698628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:47.679445  698628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:53:47.911173  698628 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:53:47.911240  698628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:53:47.915368  698628 start.go:564] Will wait 60s for crictl version
	I1101 11:53:47.915425  698628 ssh_runner.go:195] Run: which crictl
	I1101 11:53:47.925068  698628 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:53:47.953552  698628 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:53:47.953641  698628 ssh_runner.go:195] Run: crio --version
	I1101 11:53:47.995847  698628 ssh_runner.go:195] Run: crio --version
	I1101 11:53:48.049719  698628 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:53:48.053152  698628 cli_runner.go:164] Run: docker network inspect pause-482771 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:53:48.077471  698628 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 11:53:48.086057  698628 kubeadm.go:884] updating cluster {Name:pause-482771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-482771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:53:48.086226  698628 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:53:48.086291  698628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:48.138698  698628 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:48.138722  698628 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:53:48.138783  698628 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:48.168260  698628 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:48.168280  698628 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:53:48.168288  698628 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 11:53:48.168385  698628 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-482771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-482771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:53:48.168462  698628 ssh_runner.go:195] Run: crio config
	I1101 11:53:48.243310  698628 cni.go:84] Creating CNI manager for ""
	I1101 11:53:48.243378  698628 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:53:48.243415  698628 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:53:48.243476  698628 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-482771 NodeName:pause-482771 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:53:48.243665  698628 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-482771"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:53:48.243761  698628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:53:48.262233  698628 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:53:48.262326  698628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:53:48.274479  698628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 11:53:48.291862  698628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:53:48.307112  698628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
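The kubeadm manifest printed above (kubeadm.go:196) is what this scp step writes to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sketch of sanity-checking it in place, assuming the kubeadm binary sits under the /var/lib/minikube/binaries/v1.34.1 path listed a few lines earlier and that its `config validate` subcommand is available in this version:

  # run against the node via minikube ssh; paths taken from the log above
  minikube -p pause-482771 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new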
	I1101 11:53:48.322802  698628 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:53:48.331472  698628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:48.594328  698628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:48.623005  698628 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771 for IP: 192.168.76.2
	I1101 11:53:48.623040  698628 certs.go:195] generating shared ca certs ...
	I1101 11:53:48.623057  698628 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:48.623191  698628 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:53:48.623248  698628 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:53:48.623260  698628 certs.go:257] generating profile certs ...
	I1101 11:53:48.623343  698628 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.key
	I1101 11:53:48.623408  698628 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/apiserver.key.cb01ebdb
	I1101 11:53:48.623459  698628 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/proxy-client.key
	I1101 11:53:48.623573  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:53:48.623606  698628 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:53:48.623617  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:53:48.623640  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:53:48.623671  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:53:48.623696  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:53:48.623743  698628 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:48.624320  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:53:48.653380  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:53:48.683249  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:53:48.713866  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:53:48.746297  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 11:53:48.821274  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:53:48.907339  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:53:49.052831  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:53:49.174152  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:53:49.225049  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:53:49.295585  698628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:53:49.368949  698628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:53:49.423496  698628 ssh_runner.go:195] Run: openssl version
	I1101 11:53:49.456112  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:53:49.490253  698628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:53:49.497784  698628 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:53:49.497878  698628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:53:49.653937  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:53:49.692763  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:53:49.712554  698628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:49.736882  698628 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:49.736963  698628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:49.872962  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:53:49.908076  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:53:49.932141  698628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:53:49.942959  698628 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:53:49.943041  698628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:53:50.038938  698628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
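The three test/ln/hash sequences above install each CA into the node's trust store: the certificate is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the per-user certs). A minimal Go sketch of that hash-to-symlink mapping, shelling out to the same openssl invocation the log uses; the paths come from the log and the sketch is illustrative rather than minikube's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink returns the /etc/ssl/certs/<hash>.0 path a CA at pemPath would be
// linked from, using openssl's subject hash (the same value the log's
// "openssl x509 -hash -noout" calls print).
func hashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out))), nil
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println(link) // e.g. /etc/ssl/certs/b5213941.0, matching the log above
}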
	I1101 11:53:50.058060  698628 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:53:50.070097  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:53:50.198985  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:53:50.326877  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:53:50.436554  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:53:50.573489  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:53:50.684229  698628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
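Each of the openssl calls above runs with -checkend 86400, i.e. it fails if the certificate expires within the next 24 hours; passing all six checks is what lets the restart path keep the existing control-plane certificates. A rough pure-Go equivalent of that check using crypto/x509 instead of shelling out (illustrative only; minikube itself calls openssl as shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d, the same condition "openssl x509 -checkend 86400" tests
// for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}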
	I1101 11:53:50.808718  698628 kubeadm.go:401] StartCluster: {Name:pause-482771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-482771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:50.808829  698628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:53:50.808905  698628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:53:50.919552  698628 cri.go:89] found id: "a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b"
	I1101 11:53:50.919575  698628 cri.go:89] found id: "d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9"
	I1101 11:53:50.919580  698628 cri.go:89] found id: "aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0"
	I1101 11:53:50.919584  698628 cri.go:89] found id: "caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a"
	I1101 11:53:50.919587  698628 cri.go:89] found id: "e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835"
	I1101 11:53:50.919591  698628 cri.go:89] found id: "d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a"
	I1101 11:53:50.919603  698628 cri.go:89] found id: "0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7"
	I1101 11:53:50.919606  698628 cri.go:89] found id: "9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee"
	I1101 11:53:50.919609  698628 cri.go:89] found id: "16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512"
	I1101 11:53:50.919616  698628 cri.go:89] found id: "dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27"
	I1101 11:53:50.919619  698628 cri.go:89] found id: "5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a"
	I1101 11:53:50.919622  698628 cri.go:89] found id: "91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	I1101 11:53:50.919625  698628 cri.go:89] found id: "3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25"
	I1101 11:53:50.919628  698628 cri.go:89] found id: "727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d"
	I1101 11:53:50.919631  698628 cri.go:89] found id: ""
	I1101 11:53:50.919681  698628 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 11:53:50.967790  698628 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:53:50Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:53:50.967863  698628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:53:50.987304  698628 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:53:50.987327  698628 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:53:50.987397  698628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:53:51.004707  698628 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:53:51.005471  698628 kubeconfig.go:125] found "pause-482771" server: "https://192.168.76.2:8443"
	I1101 11:53:51.006528  698628 kapi.go:59] client config for pause-482771: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:51.007067  698628 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:53:51.007079  698628 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:53:51.007085  698628 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:53:51.007090  698628 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:53:51.007095  698628 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:53:51.008594  698628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:53:51.029184  698628 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 11:53:51.029290  698628 kubeadm.go:602] duration metric: took 41.952038ms to restartPrimaryControlPlane
	I1101 11:53:51.029314  698628 kubeadm.go:403] duration metric: took 220.605738ms to StartCluster
	I1101 11:53:51.029343  698628 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:51.029446  698628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:53:51.030543  698628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:51.030849  698628 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:53:51.031483  698628 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:53:51.031614  698628 config.go:182] Loaded profile config "pause-482771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:51.035156  698628 out.go:179] * Enabled addons: 
	I1101 11:53:51.035276  698628 out.go:179] * Verifying Kubernetes components...
	I1101 11:53:47.605838  699286 out.go:252] * Updating the running docker "kubernetes-upgrade-396779" container ...
	I1101 11:53:47.605872  699286 machine.go:94] provisionDockerMachine start ...
	I1101 11:53:47.605962  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:47.632876  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:47.633198  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:47.633207  699286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:53:47.805206  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-396779
	
	I1101 11:53:47.805277  699286 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-396779"
	I1101 11:53:47.805363  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:47.825536  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:47.825889  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:47.825906  699286 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-396779 && echo "kubernetes-upgrade-396779" | sudo tee /etc/hostname
	I1101 11:53:47.996775  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-396779
	
	I1101 11:53:47.996858  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:48.023107  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:48.026532  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:48.026574  699286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-396779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-396779/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-396779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:53:48.202095  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:53:48.202124  699286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:53:48.202151  699286 ubuntu.go:190] setting up certificates
	I1101 11:53:48.202166  699286 provision.go:84] configureAuth start
	I1101 11:53:48.202230  699286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-396779
	I1101 11:53:48.238215  699286 provision.go:143] copyHostCerts
	I1101 11:53:48.238303  699286 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:53:48.238325  699286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:53:48.238429  699286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:53:48.238572  699286 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:53:48.238586  699286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:53:48.238635  699286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:53:48.238767  699286 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:53:48.238782  699286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:53:48.238820  699286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:53:48.238885  699286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-396779 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-396779 localhost minikube]
	I1101 11:53:48.451571  699286 provision.go:177] copyRemoteCerts
	I1101 11:53:48.451709  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:53:48.451808  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:48.492077  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:48.667400  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:53:48.722448  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 11:53:48.761112  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 11:53:48.800081  699286 provision.go:87] duration metric: took 597.889442ms to configureAuth
	I1101 11:53:48.800111  699286 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:53:48.800336  699286 config.go:182] Loaded profile config "kubernetes-upgrade-396779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:48.800477  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:48.827614  699286 main.go:143] libmachine: Using SSH client type: native
	I1101 11:53:48.827915  699286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I1101 11:53:48.827929  699286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:53:49.841117  699286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:53:49.841134  699286 machine.go:97] duration metric: took 2.23525355s to provisionDockerMachine
	I1101 11:53:49.841145  699286 start.go:293] postStartSetup for "kubernetes-upgrade-396779" (driver="docker")
	I1101 11:53:49.841155  699286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:53:49.841236  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:53:49.841280  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:49.871830  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.003067  699286 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:53:50.014576  699286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:53:50.014605  699286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:53:50.014618  699286 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:53:50.014702  699286 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:53:50.014786  699286 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:53:50.014955  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:53:50.032583  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:50.071825  699286 start.go:296] duration metric: took 230.664665ms for postStartSetup
	I1101 11:53:50.071982  699286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:53:50.072060  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:50.104351  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.227557  699286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:53:50.234575  699286 fix.go:56] duration metric: took 2.662826277s for fixHost
	I1101 11:53:50.234597  699286 start.go:83] releasing machines lock for "kubernetes-upgrade-396779", held for 2.662881925s
	I1101 11:53:50.234669  699286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-396779
	I1101 11:53:50.260587  699286 ssh_runner.go:195] Run: cat /version.json
	I1101 11:53:50.260638  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:50.260902  699286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:53:50.260950  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:50.296930  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.297954  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:50.708886  699286 ssh_runner.go:195] Run: systemctl --version
	I1101 11:53:50.752697  699286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:53:50.938124  699286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:53:50.948667  699286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:53:50.948748  699286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:53:50.974122  699286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:53:50.974143  699286 start.go:496] detecting cgroup driver to use...
	I1101 11:53:50.974175  699286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:53:50.974222  699286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:53:51.001721  699286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:53:51.043854  699286 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:53:51.043965  699286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:53:51.087194  699286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:53:51.120272  699286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:53:51.487393  699286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:53:51.911030  699286 docker.go:234] disabling docker service ...
	I1101 11:53:51.911152  699286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:53:51.957240  699286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:53:51.986244  699286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:53:51.038168  698628 addons.go:515] duration metric: took 6.684894ms for enable addons: enabled=[]
	I1101 11:53:51.038309  698628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:51.493310  698628 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:51.529795  698628 node_ready.go:35] waiting up to 6m0s for node "pause-482771" to be "Ready" ...
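At this point the pause-482771 process begins a readiness wait of up to 6 minutes on the node object. A hedged sketch of an equivalent poll using kubectl's jsonpath output; the real code talks to the API server directly, and the node name and kubeconfig path below are simply the ones from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition via kubectl until it reports
// "True" or the deadline passes. Illustrative only; minikube's node_ready.go
// uses a Kubernetes client rather than kubectl.
func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "node", node, "-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	err := waitNodeReady("/home/jenkins/minikube-integration/21830-532863/kubeconfig",
		"pause-482771", 6*time.Minute)
	fmt.Println(err)
}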
	I1101 11:53:52.375169  699286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:53:52.766091  699286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:53:52.811924  699286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:53:52.884129  699286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:53:52.884244  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:52.914127  699286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:53:52.914276  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:52.942287  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:52.971705  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.002408  699286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:53:53.037530  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.080268  699286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:53:53.118101  699286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
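The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10.1, force cgroup_manager to cgroupfs, put conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go equivalent of the first two substitutions, operating on the file contents as a string (a sketch only, not the ssh_runner-driven sed the log shows):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors two of the sed substitutions in the log: pin the
// pause image and force the cgroupfs cgroup manager in a crio.conf-style
// drop-in. The values are taken from the log lines above.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}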
	I1101 11:53:53.140806  699286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:53:53.162118  699286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:53:53.179050  699286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:53.535056  699286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:53:53.864329  699286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:53:53.864447  699286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:53:53.868832  699286 start.go:564] Will wait 60s for crictl version
	I1101 11:53:53.868967  699286 ssh_runner.go:195] Run: which crictl
	I1101 11:53:53.872954  699286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:53:53.908858  699286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:53:53.909014  699286 ssh_runner.go:195] Run: crio --version
	I1101 11:53:53.947639  699286 ssh_runner.go:195] Run: crio --version
	I1101 11:53:54.003570  699286 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 11:53:54.007384  699286 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-396779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:53:54.030476  699286 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 11:53:54.034611  699286 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:53:54.034719  699286 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:53:54.034776  699286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:54.104951  699286 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:54.104970  699286 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:53:54.105026  699286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:53:54.161307  699286 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:53:54.161331  699286 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:53:54.161340  699286 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 11:53:54.161449  699286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-396779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
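The generated kubelet unit above relies on the standard systemd override behaviour: the bare "ExecStart=" line clears whatever ExecStart the base unit shipped with, so the following ExecStart with the v1.34.1 binary and node-specific flags is the only command systemd runs; the unit material is then copied onto the node by the small scp calls a few lines further down. A small sketch of writing such an override locally, with a hypothetical output path and a trimmed flag set:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical local path; on the node the real drop-in lands under
	// /etc/systemd/system/kubelet.service.d/ as shown later in the log.
	const dropIn = "/tmp/10-kubeadm.conf"
	unit := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --node-ip=192.168.85.2
`
	// The empty "ExecStart=" resets the base unit's command so the override
	// on the next line is the only ExecStart left after daemon-reload.
	if err := os.WriteFile(dropIn, []byte(unit), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", dropIn)
}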
	I1101 11:53:54.161543  699286 ssh_runner.go:195] Run: crio config
	I1101 11:53:54.298824  699286 cni.go:84] Creating CNI manager for ""
	I1101 11:53:54.298847  699286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:53:54.298896  699286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:53:54.298927  699286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-396779 NodeName:kubernetes-upgrade-396779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:53:54.299108  699286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-396779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:53:54.299212  699286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:53:54.319647  699286 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:53:54.319736  699286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:53:54.334217  699286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1101 11:53:54.349630  699286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:53:54.363950  699286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
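The three scp-from-memory calls above land the kubelet drop-in, the kubelet unit, and the 2222-byte kubeadm.yaml.new on the node; the kubeadm config itself is the four-document YAML printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A throwaway Go sketch that splits such a multi-document config and lists the kind of each document, using a stand-in string rather than the real generated file:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the generated kubeadm.yaml.new; only the fields needed to
	// show the document structure are included here.
	cfg := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	// Split on the YAML document separator and report each document's kind.
	for i, doc := range strings.Split(cfg, "---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}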
	I1101 11:53:54.396853  699286 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:53:54.408644  699286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:54.683473  699286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:54.707092  699286 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779 for IP: 192.168.85.2
	I1101 11:53:54.707114  699286 certs.go:195] generating shared ca certs ...
	I1101 11:53:54.707130  699286 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:54.707341  699286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:53:54.707412  699286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:53:54.707427  699286 certs.go:257] generating profile certs ...
	I1101 11:53:54.707547  699286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.key
	I1101 11:53:54.707613  699286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/apiserver.key.890ff25d
	I1101 11:53:54.707675  699286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/proxy-client.key
	I1101 11:53:54.707835  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:53:54.707893  699286 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:53:54.707909  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:53:54.707950  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:53:54.707998  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:53:54.708035  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:53:54.708099  699286 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:53:54.708737  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:53:54.752345  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:53:54.797446  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:53:54.827500  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:53:54.864576  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 11:53:54.890940  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:53:54.921418  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:53:54.954891  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:53:54.987464  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:53:55.026445  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:53:55.058531  699286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:53:55.091663  699286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:53:55.117687  699286 ssh_runner.go:195] Run: openssl version
	I1101 11:53:55.126330  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:53:55.139053  699286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:55.144023  699286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:55.144124  699286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:53:55.207788  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:53:55.216510  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:53:55.231108  699286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:53:55.237212  699286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:53:55.237309  699286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:53:55.292423  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:53:55.301296  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:53:55.310807  699286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:53:55.314926  699286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:53:55.315019  699286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:53:55.360384  699286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:53:55.369480  699286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:53:55.373485  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:53:55.418561  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:53:55.460153  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:53:55.516180  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:53:55.562646  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:53:55.608005  699286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 11:53:55.652402  699286 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-396779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-396779 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:53:55.652480  699286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:53:55.652579  699286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:53:55.693104  699286 cri.go:89] found id: "ba303fe218d2dfe039337cc65b8665e6208fcb97a0bfa6bf2fe8fb6efdaba7f0"
	I1101 11:53:55.693127  699286 cri.go:89] found id: "0dad010db6c2f9d17e6850c7aea098c9d98ddc616227a8bb6390ae9e6b2ccac0"
	I1101 11:53:55.693131  699286 cri.go:89] found id: "da3db3ef3f554bbb32ae8828dbacc4cf249c61034dc04a4d1738b5c3225e9dff"
	I1101 11:53:55.693136  699286 cri.go:89] found id: "8ff009219f7e8d56a017921144b72ef1c95e24ea074786a4942d2a0354251638"
	I1101 11:53:55.693139  699286 cri.go:89] found id: ""
	I1101 11:53:55.693218  699286 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 11:53:55.704566  699286 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:53:55Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:53:55.704681  699286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:53:55.712944  699286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:53:55.712964  699286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:53:55.713043  699286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:53:55.722293  699286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:53:55.723007  699286 kubeconfig.go:125] found "kubernetes-upgrade-396779" server: "https://192.168.85.2:8443"
	I1101 11:53:55.739896  699286 kapi.go:59] client config for kubernetes-upgrade-396779: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:55.740427  699286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:53:55.740441  699286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:53:55.740446  699286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:53:55.740451  699286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:53:55.740455  699286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:53:55.740739  699286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:53:55.752954  699286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 11:53:55.752985  699286 kubeadm.go:602] duration metric: took 40.015406ms to restartPrimaryControlPlane
	I1101 11:53:55.752993  699286 kubeadm.go:403] duration metric: took 100.600349ms to StartCluster
	I1101 11:53:55.753008  699286 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:55.753068  699286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:53:55.754098  699286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:53:55.754334  699286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:53:55.754563  699286 config.go:182] Loaded profile config "kubernetes-upgrade-396779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:53:55.754627  699286 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:53:55.754705  699286 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-396779"
	I1101 11:53:55.754724  699286 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-396779"
	W1101 11:53:55.754737  699286 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:53:55.754759  699286 host.go:66] Checking if "kubernetes-upgrade-396779" exists ...
	I1101 11:53:55.755249  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:55.755717  699286 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-396779"
	I1101 11:53:55.755741  699286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-396779"
	I1101 11:53:55.756065  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:55.763743  699286 out.go:179] * Verifying Kubernetes components...
	I1101 11:53:55.769805  699286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:53:55.798235  699286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:53:55.799794  699286 kapi.go:59] client config for kubernetes-upgrade-396779: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kubernetes-upgrade-396779/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:55.800092  699286 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-396779"
	W1101 11:53:55.800104  699286 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:53:55.800128  699286 host.go:66] Checking if "kubernetes-upgrade-396779" exists ...
	I1101 11:53:55.800529  699286 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-396779 --format={{.State.Status}}
	I1101 11:53:55.802934  699286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:53:55.802960  699286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:53:55.803024  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:55.837963  699286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:55.837982  699286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:53:55.838049  699286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-396779
	I1101 11:53:55.845965  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:55.875313  699286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/kubernetes-upgrade-396779/id_rsa Username:docker}
	I1101 11:53:56.051530  699286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:53:56.052794  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:53:56.075212  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:56.130552  699286 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:53:56.130698  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:56.270763  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.270815  699286 retry.go:31] will retry after 254.045518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 11:53:56.270872  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.270884  699286 retry.go:31] will retry after 140.865867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.412272  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:56.525856  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:53:56.535204  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.535236  699286 retry.go:31] will retry after 349.443428ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.630875  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:56.634920  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.635001  699286 retry.go:31] will retry after 488.85168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.884927  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:53:56.974750  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.974792  699286 retry.go:31] will retry after 451.681607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.124078  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:53:57.131533  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:57.205942  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.206057  699286 retry.go:31] will retry after 431.089549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:56.372200  698628 node_ready.go:49] node "pause-482771" is "Ready"
	I1101 11:53:56.372226  698628 node_ready.go:38] duration metric: took 4.842401899s for node "pause-482771" to be "Ready" ...
	I1101 11:53:56.372240  698628 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:53:56.372300  698628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:53:56.392370  698628 api_server.go:72] duration metric: took 5.36145211s to wait for apiserver process to appear ...
	I1101 11:53:56.392392  698628 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:53:56.392412  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:56.411380  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:53:56.411413  698628 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:53:56.893026  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:56.907751  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:53:56.907795  698628 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:53:57.393399  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:57.402908  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:53:57.402933  698628 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:53:57.892630  698628 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 11:53:57.900842  698628 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 11:53:57.902018  698628 api_server.go:141] control plane version: v1.34.1
	I1101 11:53:57.902046  698628 api_server.go:131] duration metric: took 1.509647262s to wait for apiserver health ...
	I1101 11:53:57.902055  698628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:53:57.905495  698628 system_pods.go:59] 7 kube-system pods found
	I1101 11:53:57.905538  698628 system_pods.go:61] "coredns-66bc5c9577-49sg2" [5b31d9e2-1052-4646-a62f-7adc7c2d045c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:53:57.905548  698628 system_pods.go:61] "etcd-pause-482771" [7c0221c3-61a4-484e-8cfb-08fa19deb2cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:53:57.905553  698628 system_pods.go:61] "kindnet-xscmv" [ddeffdc3-3ed3-40ea-8b90-931a1aee6317] Running
	I1101 11:53:57.905560  698628 system_pods.go:61] "kube-apiserver-pause-482771" [ee664fdb-2fd8-4f99-947c-2885a3f74227] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:53:57.905567  698628 system_pods.go:61] "kube-controller-manager-pause-482771" [be707719-1b56-46cf-827c-481f64c7da47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:53:57.905576  698628 system_pods.go:61] "kube-proxy-c22qb" [d0861096-6955-4968-aa01-324237dd0609] Running
	I1101 11:53:57.905583  698628 system_pods.go:61] "kube-scheduler-pause-482771" [b15c318f-8138-47a0-94d3-d8a0a6b7fad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:53:57.905597  698628 system_pods.go:74] duration metric: took 3.535684ms to wait for pod list to return data ...
	I1101 11:53:57.905608  698628 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:53:57.908128  698628 default_sa.go:45] found service account: "default"
	I1101 11:53:57.908159  698628 default_sa.go:55] duration metric: took 2.545214ms for default service account to be created ...
	I1101 11:53:57.908169  698628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:53:57.910995  698628 system_pods.go:86] 7 kube-system pods found
	I1101 11:53:57.911028  698628 system_pods.go:89] "coredns-66bc5c9577-49sg2" [5b31d9e2-1052-4646-a62f-7adc7c2d045c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:53:57.911038  698628 system_pods.go:89] "etcd-pause-482771" [7c0221c3-61a4-484e-8cfb-08fa19deb2cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:53:57.911085  698628 system_pods.go:89] "kindnet-xscmv" [ddeffdc3-3ed3-40ea-8b90-931a1aee6317] Running
	I1101 11:53:57.911093  698628 system_pods.go:89] "kube-apiserver-pause-482771" [ee664fdb-2fd8-4f99-947c-2885a3f74227] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:53:57.911101  698628 system_pods.go:89] "kube-controller-manager-pause-482771" [be707719-1b56-46cf-827c-481f64c7da47] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:53:57.911111  698628 system_pods.go:89] "kube-proxy-c22qb" [d0861096-6955-4968-aa01-324237dd0609] Running
	I1101 11:53:57.911119  698628 system_pods.go:89] "kube-scheduler-pause-482771" [b15c318f-8138-47a0-94d3-d8a0a6b7fad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:53:57.911144  698628 system_pods.go:126] duration metric: took 2.966742ms to wait for k8s-apps to be running ...
	I1101 11:53:57.911161  698628 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:53:57.911231  698628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:53:57.924318  698628 system_svc.go:56] duration metric: took 13.147935ms WaitForService to wait for kubelet
	I1101 11:53:57.924346  698628 kubeadm.go:587] duration metric: took 6.893434502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:53:57.924376  698628 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:53:57.927529  698628 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:53:57.927564  698628 node_conditions.go:123] node cpu capacity is 2
	I1101 11:53:57.927576  698628 node_conditions.go:105] duration metric: took 3.19497ms to run NodePressure ...
	I1101 11:53:57.927589  698628 start.go:242] waiting for startup goroutines ...
	I1101 11:53:57.927596  698628 start.go:247] waiting for cluster config update ...
	I1101 11:53:57.927604  698628 start.go:256] writing updated cluster config ...
	I1101 11:53:57.927914  698628 ssh_runner.go:195] Run: rm -f paused
	I1101 11:53:57.931665  698628 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:53:57.932332  698628 kapi.go:59] client config for pause-482771: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/profiles/pause-482771/client.key", CAFile:"/home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:53:57.935431  698628 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49sg2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:53:57.426689  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:53:57.515206  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.515302  699286 retry.go:31] will retry after 561.90039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.631284  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:53:57.637793  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:53:57.701072  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:57.701104  699286 retry.go:31] will retry after 795.170356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.077668  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:58.131295  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:58.148817  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.148896  699286 retry.go:31] will retry after 978.703539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.497271  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:53:58.562095  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.562164  699286 retry.go:31] will retry after 1.672012903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:58.631484  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:53:59.128626  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:53:59.131110  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:59.196662  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:59.196688  699286 retry.go:31] will retry after 1.150881874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:53:59.630779  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:00.137137  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:00.236769  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:54:00.349772  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:54:00.360806  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:00.360915  699286 retry.go:31] will retry after 1.952791589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 11:54:00.473311  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:00.473427  699286 retry.go:31] will retry after 3.908754672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:00.631795  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:01.131270  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:01.631162  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:02.131687  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:53:59.941744  698628 pod_ready.go:104] pod "coredns-66bc5c9577-49sg2" is not "Ready", error: <nil>
	I1101 11:54:01.441148  698628 pod_ready.go:94] pod "coredns-66bc5c9577-49sg2" is "Ready"
	I1101 11:54:01.441180  698628 pod_ready.go:86] duration metric: took 3.5057231s for pod "coredns-66bc5c9577-49sg2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:01.443538  698628 pod_ready.go:83] waiting for pod "etcd-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.949755  698628 pod_ready.go:94] pod "etcd-pause-482771" is "Ready"
	I1101 11:54:02.949827  698628 pod_ready.go:86] duration metric: took 1.506257198s for pod "etcd-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.952276  698628 pod_ready.go:83] waiting for pod "kube-apiserver-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.956778  698628 pod_ready.go:94] pod "kube-apiserver-pause-482771" is "Ready"
	I1101 11:54:02.956805  698628 pod_ready.go:86] duration metric: took 4.503122ms for pod "kube-apiserver-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.959275  698628 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:02.314769  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:54:02.382919  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:02.382955  699286 retry.go:31] will retry after 3.654222907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:02.631296  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:03.131233  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:03.631073  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:04.131578  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:04.382832  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:54:04.512433  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:04.512474  699286 retry.go:31] will retry after 3.209864376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:04.630747  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:05.131134  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:05.630867  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:06.037825  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:54:06.097067  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:06.097102  699286 retry.go:31] will retry after 4.86929524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:06.131403  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:06.631086  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:07.131617  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 11:54:04.967039  698628 pod_ready.go:104] pod "kube-controller-manager-pause-482771" is not "Ready", error: <nil>
	I1101 11:54:06.965373  698628 pod_ready.go:94] pod "kube-controller-manager-pause-482771" is "Ready"
	I1101 11:54:06.965398  698628 pod_ready.go:86] duration metric: took 4.006050681s for pod "kube-controller-manager-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:06.968070  698628 pod_ready.go:83] waiting for pod "kube-proxy-c22qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:06.974381  698628 pod_ready.go:94] pod "kube-proxy-c22qb" is "Ready"
	I1101 11:54:06.974416  698628 pod_ready.go:86] duration metric: took 6.323462ms for pod "kube-proxy-c22qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:06.976755  698628 pod_ready.go:83] waiting for pod "kube-scheduler-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:07.238730  698628 pod_ready.go:94] pod "kube-scheduler-pause-482771" is "Ready"
	I1101 11:54:07.238757  698628 pod_ready.go:86] duration metric: took 261.976403ms for pod "kube-scheduler-pause-482771" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:54:07.238769  698628 pod_ready.go:40] duration metric: took 9.307071156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:54:07.297857  698628 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 11:54:07.300878  698628 out.go:179] * Done! kubectl is now configured to use "pause-482771" cluster and "default" namespace by default
	I1101 11:54:07.630788  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:07.722918  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 11:54:07.813602  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:07.813635  699286 retry.go:31] will retry after 5.773134852s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:08.130836  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:08.630885  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:09.130903  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:09.631427  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:10.130805  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:10.630821  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:10.966602  699286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 11:54:11.058902  699286 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:11.058937  699286 retry.go:31] will retry after 4.187542065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 11:54:11.131110  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:11.631703  699286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:54:11.665972  699286 api_server.go:72] duration metric: took 15.911608975s to wait for apiserver process to appear ...
	I1101 11:54:11.665995  699286 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:54:11.666022  699286 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	
	
	==> CRI-O <==
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.250491643Z" level=info msg="Created container aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0: kube-system/coredns-66bc5c9577-49sg2/coredns" id=ebfe642d-0d14-4405-964e-540b80a91087 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.257870995Z" level=info msg="Created container caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a: kube-system/kube-proxy-c22qb/kube-proxy" id=77f91192-ca64-4cae-a890-db6205e11605 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.258436024Z" level=info msg="Started container" PID=2261 containerID=e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835 description=kube-system/kindnet-xscmv/kindnet-cni id=2fc40b6e-d168-4d6f-9adc-f04ee577a10a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b855ba137c91b8167e492b75549d185759d3c89bff7600b2b2ed8d11a93383a1
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.270304221Z" level=info msg="Starting container: aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0" id=9325fa2a-3653-4a37-9191-b66e3e89552b name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.271031197Z" level=info msg="Starting container: caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a" id=033769db-4453-4deb-abe7-d51fa3ba7c70 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.274439191Z" level=info msg="Created container d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9: kube-system/kube-controller-manager-pause-482771/kube-controller-manager" id=ba0fe261-8a81-41da-82fa-c5a540f24abd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.28687954Z" level=info msg="Started container" PID=2270 containerID=caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a description=kube-system/kube-proxy-c22qb/kube-proxy id=033769db-4453-4deb-abe7-d51fa3ba7c70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb48dcff20368cc29326549bafaf66b3dd57f0e9b114f34f4c032ed582df4658
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.294414037Z" level=info msg="Starting container: d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9" id=16a9ac3a-4955-4b56-af17-feacf0a9cbca name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.306127689Z" level=info msg="Started container" PID=2273 containerID=aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0 description=kube-system/coredns-66bc5c9577-49sg2/coredns id=9325fa2a-3653-4a37-9191-b66e3e89552b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c97c1949f2d243cf5e0375dc9328490f6bafbfa7f84fbff9eba24d9944ee59bb
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.316248418Z" level=info msg="Created container a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b: kube-system/etcd-pause-482771/etcd" id=9ccbbaef-0c73-497e-ae22-c4aed8c48190 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.322501823Z" level=info msg="Starting container: a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b" id=abb80776-5dc9-4ced-bc9d-1d5eb2c05103 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.333270716Z" level=info msg="Started container" PID=2284 containerID=d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9 description=kube-system/kube-controller-manager-pause-482771/kube-controller-manager id=16a9ac3a-4955-4b56-af17-feacf0a9cbca name=/runtime.v1.RuntimeService/StartContainer sandboxID=a317857f3407a3091a6247b4a5f8ecbf443ad5a0dbd803fa0b586a8cf39b884f
	Nov 01 11:53:49 pause-482771 crio[2083]: time="2025-11-01T11:53:49.340946663Z" level=info msg="Started container" PID=2319 containerID=a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b description=kube-system/etcd-pause-482771/etcd id=abb80776-5dc9-4ced-bc9d-1d5eb2c05103 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b9e623bf705e4a9e2c5bd0635aa161ea602bf6d4edba4c72009d4c56462bcb2
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.772952559Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.776518841Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.776555765Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.776577878Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.780035104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.780074095Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.78009451Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.783343675Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.78337895Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.783401809Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.787455228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:53:59 pause-482771 crio[2083]: time="2025-11-01T11:53:59.787491708Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a7b6614083cf2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   4b9e623bf705e       etcd-pause-482771                      kube-system
	d239515eba866       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   a317857f3407a       kube-controller-manager-pause-482771   kube-system
	aa9b1fe68a325       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   c97c1949f2d24       coredns-66bc5c9577-49sg2               kube-system
	caefba313f65a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   eb48dcff20368       kube-proxy-c22qb                       kube-system
	e0cc97d39f883       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   b855ba137c91b       kindnet-xscmv                          kube-system
	d27b59c7134b6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   4ab6c741da7b9       kube-apiserver-pause-482771            kube-system
	0c423f58739ae       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   91fcc0aef761b       kube-scheduler-pause-482771            kube-system
	9f67935511fe8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   c97c1949f2d24       coredns-66bc5c9577-49sg2               kube-system
	16914a0c9df1d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   eb48dcff20368       kube-proxy-c22qb                       kube-system
	dc23516676917       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   b855ba137c91b       kindnet-xscmv                          kube-system
	5a32cc49f9c58       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   4ab6c741da7b9       kube-apiserver-pause-482771            kube-system
	91d6ed15f3167       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4b9e623bf705e       etcd-pause-482771                      kube-system
	3b9b4a780447f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   91fcc0aef761b       kube-scheduler-pause-482771            kube-system
	727279a73ea7c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   a317857f3407a       kube-controller-manager-pause-482771   kube-system
	
	
	==> coredns [9f67935511fe85dccad10f4bacd987b015a57f5e84c1a9bf33d2c3f228c42bee] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56615 - 1102 "HINFO IN 221121602160979795.876064981112779147. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010305609s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa9b1fe68a32512ccba2615fe831c82989c78320697c8797cd487eb6d78296d0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51619 - 38278 "HINFO IN 4304995760194375860.86542241889092373. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021833338s
	
	
	==> describe nodes <==
	Name:               pause-482771
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-482771
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=pause-482771
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_52_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-482771
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:53:58 +0000   Sat, 01 Nov 2025 11:53:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-482771
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                436bd671-e9ad-45d7-a076-791b029e6c70
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-49sg2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     81s
	  kube-system                 etcd-pause-482771                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         89s
	  kube-system                 kindnet-xscmv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-482771             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-482771    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-c22qb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-482771             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 80s   kube-proxy       
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 87s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s   kubelet          Node pause-482771 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s   kubelet          Node pause-482771 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s   kubelet          Node pause-482771 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s   node-controller  Node pause-482771 event: Registered Node pause-482771 in Controller
	  Normal   NodeReady                40s   kubelet          Node pause-482771 status is now: NodeReady
	  Normal   RegisteredNode           15s   node-controller  Node pause-482771 event: Registered Node pause-482771 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:24] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:26] overlayfs: idmapped layers are currently not supported
	[  +2.957169] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:27] overlayfs: idmapped layers are currently not supported
	[ +46.322577] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:34] overlayfs: idmapped layers are currently not supported
	[ +35.784283] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856] <==
	{"level":"warn","ts":"2025-11-01T11:52:42.980351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.025017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.081635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.142556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.169892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.199409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:52:43.337751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34776","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T11:53:40.033759Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T11:53:40.033815Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-482771","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-01T11:53:40.057175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:53:40.220874Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:53:40.221020Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:53:40.221069Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-01T11:53:40.221215Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T11:53:40.221266Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221515Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221571Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:53:40.221606Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221709Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:53:40.221747Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:53:40.221782Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:53:40.224475Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-01T11:53:40.224596Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:53:40.224648Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-01T11:53:40.224700Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-482771","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [a7b6614083cf2f233cc0dd489c44d9cb385ec5569ddb19d4551602330ae9ca5b] <==
	{"level":"warn","ts":"2025-11-01T11:53:54.242399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.291776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.348162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.389099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.441805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.478584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.538418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.576502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.592022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.607342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.631275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.666179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.718576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.719029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.744492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.766050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.781137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.806742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.822571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.844974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.867388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.908286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.937570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:54.969574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:53:55.114978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:54:14 up  3:36,  0 user,  load average: 4.37, 2.85, 2.30
	Linux pause-482771 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dc23516676917d642b6c16ed300d1e45e346ba79c17785272f14488c5247ba27] <==
	I1101 11:52:53.820195       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:52:53.820615       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 11:52:53.820829       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:52:53.820879       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:52:53.820919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:52:54.022632       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:52:54.022708       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:52:54.022745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:52:54.023875       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 11:53:24.022799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 11:53:24.024037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 11:53:24.024041       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 11:53:24.024212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 11:53:25.123637       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:53:25.123738       1 metrics.go:72] Registering metrics
	I1101 11:53:25.123866       1 controller.go:711] "Syncing nftables rules"
	I1101 11:53:34.026161       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:53:34.026273       1 main.go:301] handling current node
	
	
	==> kindnet [e0cc97d39f88365805bf21bef41fd8bc571a28b9d50724f15d19c0e23d5e0835] <==
	I1101 11:53:49.421424       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:53:49.421819       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 11:53:49.421969       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:53:49.421992       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:53:49.422009       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:53:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:53:49.772087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:53:49.772171       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:53:49.772207       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:53:49.777482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 11:53:56.476597       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:53:56.476731       1 metrics.go:72] Registering metrics
	I1101 11:53:56.476829       1 controller.go:711] "Syncing nftables rules"
	I1101 11:53:59.772006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:53:59.772069       1 main.go:301] handling current node
	I1101 11:54:09.772767       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:54:09.772835       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a32cc49f9c585fd9a10fe9e5020e8f2d59dd62e9171d6be93f489bd161d5f0a] <==
	W1101 11:53:40.075738       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075780       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075741       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075631       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075514       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075863       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075895       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075929       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075960       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.075992       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076025       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076053       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076081       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076329       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.076359       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077220       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077289       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077323       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077354       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077385       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077416       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077445       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.077479       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 11:53:40.074804       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d27b59c7134b6455fef2bb0926a50a60c57f67a22cb6dda3ce22c06d5c2e597a] <==
	I1101 11:53:56.385219       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 11:53:56.385287       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:53:56.391007       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 11:53:56.391071       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 11:53:56.392654       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 11:53:56.392688       1 aggregator.go:171] initial CRD sync complete...
	I1101 11:53:56.392696       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 11:53:56.392702       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:53:56.392706       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:53:56.392841       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 11:53:56.435798       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:53:56.450725       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 11:53:56.464647       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 11:53:56.466542       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 11:53:56.466609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 11:53:56.474035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 11:53:56.492410       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 11:53:56.492499       1 policy_source.go:240] refreshing policies
	I1101 11:53:56.513836       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:53:57.164786       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:53:58.399580       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 11:53:59.868402       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:53:59.969893       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 11:54:00.096713       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:54:00.189159       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [727279a73ea7c29fdf4409bd58a498ff5bc8b4b7e350cc84e50148bf0271ad3d] <==
	I1101 11:52:52.277786       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 11:52:52.284852       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 11:52:52.285741       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 11:52:52.287059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 11:52:52.298915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:52:52.298937       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:52:52.298944       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:52:52.309954       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:52:52.319866       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 11:52:52.319918       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 11:52:52.320319       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 11:52:52.321499       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 11:52:52.321744       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 11:52:52.321901       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 11:52:52.321941       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 11:52:52.322024       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 11:52:52.322095       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 11:52:52.323268       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 11:52:52.323361       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 11:52:52.323557       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 11:52:52.324797       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 11:52:52.324828       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 11:52:52.325949       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 11:52:52.332402       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 11:53:37.281950       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [d239515eba866c5064c4360de5f5614adc0f51ad6036dc7ef78b79535d53fdc9] <==
	I1101 11:53:59.670435       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 11:53:59.671999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 11:53:59.672247       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 11:53:59.673805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 11:53:59.673855       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 11:53:59.673872       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 11:53:59.675336       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 11:53:59.677810       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 11:53:59.679374       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 11:53:59.680494       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 11:53:59.682167       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 11:53:59.683588       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 11:53:59.686059       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 11:53:59.686106       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 11:53:59.691734       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 11:53:59.693922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 11:53:59.711464       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 11:53:59.712202       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 11:53:59.712249       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 11:53:59.712320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:53:59.712384       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:53:59.712415       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:53:59.712813       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 11:53:59.712845       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 11:53:59.779567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [16914a0c9df1d75f9a1e62945a4fa0498edf458829970c78db0f7e6f3c6a9512] <==
	I1101 11:52:53.767970       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:52:53.859552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:52:53.960180       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:52:53.960251       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 11:52:53.960323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:52:53.988378       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:52:53.988436       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:52:53.992468       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:52:53.992765       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:52:53.992787       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:52:53.994698       1 config.go:200] "Starting service config controller"
	I1101 11:52:53.994782       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:52:53.994826       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:52:53.994854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:52:53.995049       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:52:53.995088       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:52:53.997496       1 config.go:309] "Starting node config controller"
	I1101 11:52:53.997568       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:52:53.997604       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:52:54.095541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:52:54.095551       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 11:52:54.095493       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [caefba313f65afc61d156bc5fe215b355440af9f01ec06ac98da79590bc42c0a] <==
	I1101 11:53:55.328316       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:53:55.919517       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:53:56.505901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:53:56.505974       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 11:53:56.506046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:53:56.538730       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:53:56.538785       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:53:56.553867       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:53:56.554277       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:53:56.554335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:53:56.555723       1 config.go:200] "Starting service config controller"
	I1101 11:53:56.555803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:53:56.555851       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:53:56.560325       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:53:56.556060       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:53:56.560364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:53:56.556712       1 config.go:309] "Starting node config controller"
	I1101 11:53:56.560374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:53:56.560379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:53:56.657672       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:53:56.660499       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:53:56.660541       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c423f58739ae3c1d8fdc82a3358ed864a553ef4089d3cffa5080a5c59f84fa7] <==
	I1101 11:53:52.795394       1 serving.go:386] Generated self-signed cert in-memory
	W1101 11:53:56.343595       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:53:56.343638       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:53:56.343649       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:53:56.343657       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:53:56.407877       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:53:56.407921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:53:56.420275       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:53:56.420363       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:53:56.420368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:56.420441       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:56.522063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [3b9b4a780447f349b49e46daf0010a349f42158f3b1e36e3eeba375f8c1a4b25] <==
	E1101 11:52:45.720785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 11:52:45.724959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 11:52:45.726938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 11:52:45.726991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 11:52:45.727032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 11:52:45.727071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 11:52:45.727115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 11:52:45.727197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 11:52:45.727323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:52:45.727413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 11:52:45.727449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 11:52:45.727537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 11:52:45.727578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 11:52:45.727650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 11:52:45.727701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 11:52:45.727975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 11:52:45.728070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 11:52:45.729092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1101 11:52:46.896190       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:40.026243       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 11:53:40.026372       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 11:53:40.026390       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 11:53:40.026411       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:53:40.026636       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 11:53:40.026653       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.990152    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-482771\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ce7ce70e8d7b65ff886873cce8964842" pod="kube-system/kube-apiserver-pause-482771"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.990381    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-482771\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="81101c32cb35feaeaf80f1278d075e53" pod="kube-system/kube-controller-manager-pause-482771"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.990726    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-xscmv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ddeffdc3-3ed3-40ea-8b90-931a1aee6317" pod="kube-system/kindnet-xscmv"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.991227    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c22qb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d0861096-6955-4968-aa01-324237dd0609" pod="kube-system/kube-proxy-c22qb"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: I1101 11:53:48.991478    1313 scope.go:117] "RemoveContainer" containerID="91d6ed15f3167e17a9859ff386b63b4e59a15ce12e98cc0f123d921d6ca28856"
	Nov 01 11:53:48 pause-482771 kubelet[1313]: E1101 11:53:48.992523    1313 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-49sg2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5b31d9e2-1052-4646-a62f-7adc7c2d045c" pod="kube-system/coredns-66bc5c9577-49sg2"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.097938    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.098356    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.098656    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.099196    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.099611    1313 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: I1101 11:53:49.099741    1313 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 01 11:53:49 pause-482771 kubelet[1313]: E1101 11:53:49.100096    1313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-482771?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="200ms"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.275069    1313 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-482771\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.276055    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="ce7ce70e8d7b65ff886873cce8964842" pod="kube-system/kube-apiserver-pause-482771"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.277800    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="81101c32cb35feaeaf80f1278d075e53" pod="kube-system/kube-controller-manager-pause-482771"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.334770    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xscmv\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="ddeffdc3-3ed3-40ea-8b90-931a1aee6317" pod="kube-system/kindnet-xscmv"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.356794    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-c22qb\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="d0861096-6955-4968-aa01-324237dd0609" pod="kube-system/kube-proxy-c22qb"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.370780    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-49sg2\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="5b31d9e2-1052-4646-a62f-7adc7c2d045c" pod="kube-system/coredns-66bc5c9577-49sg2"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.379645    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="fad7ceeb4086c6bce1e6a1f1f2d84a76" pod="kube-system/etcd-pause-482771"
	Nov 01 11:53:56 pause-482771 kubelet[1313]: E1101 11:53:56.399420    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-482771\" is forbidden: User \"system:node:pause-482771\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-482771' and this object" podUID="05f2cfa65e3ab911cf76bf0e3596338d" pod="kube-system/kube-scheduler-pause-482771"
	Nov 01 11:53:57 pause-482771 kubelet[1313]: W1101 11:53:57.732638    1313 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 11:54:07 pause-482771 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 11:54:07 pause-482771 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 11:54:07 pause-482771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-482771 -n pause-482771
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-482771 -n pause-482771: exit status 2 (538.912138ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-482771 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.37s)
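
The final probe in the pause post-mortem above can be replayed by hand when triaging locally. A minimal sketch in Go (not part of the test suite; the binary path and profile name are copied from this run and would differ elsewhere):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the same status probe the post-mortem helper used above.
		// Binary path and profile name are specific to this run (hypothetical elsewhere).
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "pause-482771", "-n", "pause-482771")
		out, err := cmd.CombinedOutput()
		// In this run the probe printed "Running" and exited with status 2,
		// matching the capture above.
		fmt.Printf("status output: %s (err: %v)\n", out, err)
	}
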

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.765546ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:57:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-952358 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-952358 describe deploy/metrics-server -n kube-system: exit status 1 (93.522206ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-952358 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
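
The enable failure above traces back to the paused-state check rather than the addon itself: `sudo runc list -f json` exited 1 with "open /run/runc: no such file or directory". A minimal sketch for confirming the node-side state by hand, assuming `minikube ssh` access to the same profile (this wrapper is illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same runc listing the paused check performs, via minikube ssh.
		// Binary path and profile name mirror this run (hypothetical elsewhere).
		cmd := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "old-k8s-version-952358", "--", "sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		// A non-zero exit with "open /run/runc: no such file or directory" is
		// what surfaces as MK_ADDON_ENABLE_PAUSED from "addons enable".
		fmt.Printf("runc list output: %s (err: %v)\n", out, err)
	}
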
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-952358
helpers_test.go:243: (dbg) docker inspect old-k8s-version-952358:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431",
	        "Created": "2025-11-01T11:56:05.046595205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 714174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:56:05.130618448Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/hostname",
	        "HostsPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/hosts",
	        "LogPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431-json.log",
	        "Name": "/old-k8s-version-952358",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-952358:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-952358",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431",
	                "LowerDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-952358",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-952358/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-952358",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-952358",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-952358",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f453540d68229880069d24e8ee230df86086cc2f229c23a6ba5cb7db0108f7ce",
	            "SandboxKey": "/var/run/docker/netns/f453540d6822",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33775"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33776"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33779"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-952358": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:56:29:85:7c:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9bca57e57ae79fd54c9c7ebc4412107912a1f60b0190f08a0287f153c5cacff",
	                    "EndpointID": "7aef32589aa21c46c2c802a7c0a7c48ddd472a8edfa4bf3ef35be3953de907d8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-952358",
	                        "5af3c19b6c57"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-952358 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-952358 logs -n 25: (1.233426015s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-507511 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo containerd config dump                                                                                                                                                                                                  │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo crio config                                                                                                                                                                                                             │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ delete  │ -p cilium-507511                                                                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p force-systemd-env-857548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ force-systemd-flag-643844 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ delete  │ -p force-systemd-flag-643844                                                                                                                                                                                                                  │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p force-systemd-env-857548                                                                                                                                                                                                                   │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ cert-options-505831 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:55:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:55:58.476403  713774 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:55:58.476578  713774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:55:58.476590  713774 out.go:374] Setting ErrFile to fd 2...
	I1101 11:55:58.476595  713774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:55:58.476874  713774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:55:58.477394  713774 out.go:368] Setting JSON to false
	I1101 11:55:58.478356  713774 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13108,"bootTime":1761985051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:55:58.478425  713774 start.go:143] virtualization:  
	I1101 11:55:58.482016  713774 out.go:179] * [old-k8s-version-952358] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:55:58.486559  713774 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:55:58.486702  713774 notify.go:221] Checking for updates...
	I1101 11:55:58.493270  713774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:55:58.496577  713774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:55:58.499890  713774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:55:58.503141  713774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:55:58.506333  713774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:55:58.510158  713774 config.go:182] Loaded profile config "cert-expiration-534694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:55:58.510308  713774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:55:58.542093  713774 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:55:58.542229  713774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:55:58.601646  713774 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:55:58.590341754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:55:58.601941  713774 docker.go:319] overlay module found
	I1101 11:55:58.612358  713774 out.go:179] * Using the docker driver based on user configuration
	I1101 11:55:58.615352  713774 start.go:309] selected driver: docker
	I1101 11:55:58.615381  713774 start.go:930] validating driver "docker" against <nil>
	I1101 11:55:58.615396  713774 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:55:58.616243  713774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:55:58.675200  713774 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:55:58.665275174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:55:58.675365  713774 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:55:58.675609  713774 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:55:58.678639  713774 out.go:179] * Using Docker driver with root privileges
	I1101 11:55:58.681527  713774 cni.go:84] Creating CNI manager for ""
	I1101 11:55:58.681594  713774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:55:58.681607  713774 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 11:55:58.681783  713774 start.go:353] cluster config:
	{Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:55:58.686779  713774 out.go:179] * Starting "old-k8s-version-952358" primary control-plane node in "old-k8s-version-952358" cluster
	I1101 11:55:58.689583  713774 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:55:58.692439  713774 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:55:58.695245  713774 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:55:58.695316  713774 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 11:55:58.695329  713774 cache.go:59] Caching tarball of preloaded images
	I1101 11:55:58.695330  713774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:55:58.695585  713774 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:55:58.695620  713774 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 11:55:58.695739  713774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json ...
	I1101 11:55:58.695767  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json: {Name:mkc2882c11742f01d125f83c57cd3dafe6241b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:55:58.714649  713774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:55:58.714675  713774 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:55:58.714689  713774 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:55:58.714710  713774 start.go:360] acquireMachinesLock for old-k8s-version-952358: {Name:mk5b8de3b8dc99aca4b3c9de9389ab7eb20d4d78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:55:58.714820  713774 start.go:364] duration metric: took 89.281µs to acquireMachinesLock for "old-k8s-version-952358"
	I1101 11:55:58.714849  713774 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:55:58.714925  713774 start.go:125] createHost starting for "" (driver="docker")
	I1101 11:55:58.718306  713774 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:55:58.718531  713774 start.go:159] libmachine.API.Create for "old-k8s-version-952358" (driver="docker")
	I1101 11:55:58.718572  713774 client.go:173] LocalClient.Create starting
	I1101 11:55:58.718657  713774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:55:58.718692  713774 main.go:143] libmachine: Decoding PEM data...
	I1101 11:55:58.718709  713774 main.go:143] libmachine: Parsing certificate...
	I1101 11:55:58.718769  713774 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:55:58.718795  713774 main.go:143] libmachine: Decoding PEM data...
	I1101 11:55:58.718809  713774 main.go:143] libmachine: Parsing certificate...
	I1101 11:55:58.719194  713774 cli_runner.go:164] Run: docker network inspect old-k8s-version-952358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 11:55:58.735719  713774 cli_runner.go:211] docker network inspect old-k8s-version-952358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 11:55:58.735802  713774 network_create.go:284] running [docker network inspect old-k8s-version-952358] to gather additional debugging logs...
	I1101 11:55:58.735823  713774 cli_runner.go:164] Run: docker network inspect old-k8s-version-952358
	W1101 11:55:58.755198  713774 cli_runner.go:211] docker network inspect old-k8s-version-952358 returned with exit code 1
	I1101 11:55:58.755228  713774 network_create.go:287] error running [docker network inspect old-k8s-version-952358]: docker network inspect old-k8s-version-952358: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-952358 not found
	I1101 11:55:58.755243  713774 network_create.go:289] output of [docker network inspect old-k8s-version-952358]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-952358 not found
	
	** /stderr **
	I1101 11:55:58.755353  713774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:55:58.772270  713774 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 11:55:58.772722  713774 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 11:55:58.773083  713774 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 11:55:58.773282  713774 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-09a35ac85c63 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:b7:90:4e:a1:cb} reservation:<nil>}
	I1101 11:55:58.773800  713774 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1b5a0}
	I1101 11:55:58.773824  713774 network_create.go:124] attempt to create docker network old-k8s-version-952358 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 11:55:58.773890  713774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-952358 old-k8s-version-952358
	I1101 11:55:58.830191  713774 network_create.go:108] docker network old-k8s-version-952358 192.168.85.0/24 created
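The four "skipping subnet ... that is taken" lines above and the final pick of 192.168.85.0/24 follow a simple first-free scan over candidate /24 ranges. Below is a minimal Go sketch of that selection, assuming a precomputed set of taken subnets and a step of 9 in the third octet; both details are inferred from the log, and this is not minikube's actual network.go.

// subnetpick.go - illustrative only, not minikube's network.go: scan
// 192.168.x.0/24 candidates in steps of 9 in the third octet and return
// the first subnet not already backing an existing bridge network.
package main

import (
	"fmt"
	"net"
)

func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue // already used by another docker network
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// Subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	free, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", free)
}

With the four subnets above marked as taken, the sketch prints 192.168.85.0/24, matching the network that is then created.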
	I1101 11:55:58.830227  713774 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-952358" container
	I1101 11:55:58.830302  713774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:55:58.846981  713774 cli_runner.go:164] Run: docker volume create old-k8s-version-952358 --label name.minikube.sigs.k8s.io=old-k8s-version-952358 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:55:58.864832  713774 oci.go:103] Successfully created a docker volume old-k8s-version-952358
	I1101 11:55:58.864927  713774 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-952358-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-952358 --entrypoint /usr/bin/test -v old-k8s-version-952358:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:55:59.419534  713774 oci.go:107] Successfully prepared a docker volume old-k8s-version-952358
	I1101 11:55:59.419583  713774 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:55:59.419603  713774 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:55:59.419687  713774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-952358:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:56:04.967785  713774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-952358:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.548059223s)
	I1101 11:56:04.967812  713774 kic.go:203] duration metric: took 5.548205703s to extract preloaded images to volume ...
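The preload extraction above is a single docker run with /usr/bin/tar as the entrypoint, bind-mounting the lz4 tarball read-only and the machine volume as the extraction target. The sketch below issues that same command from Go; image tag and paths are copied from the log line, but the helper itself is illustrative rather than minikube's code.

// extractpreload.go - sketch of the logged extraction command: run the
// kicbase image with tar as the entrypoint, mounting the preload tarball
// read-only and the machine volume as the extraction target.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Arguments copied from the docker run line above.
	err := extractPreload(
		"/home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		"old-k8s-version-952358",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773")
	fmt.Println("extract error:", err)
}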
	W1101 11:56:04.967945  713774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:56:04.968066  713774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:56:05.028553  713774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-952358 --name old-k8s-version-952358 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-952358 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-952358 --network old-k8s-version-952358 --ip 192.168.85.2 --volume old-k8s-version-952358:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:56:05.356202  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Running}}
	I1101 11:56:05.375498  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:56:05.407615  713774 cli_runner.go:164] Run: docker exec old-k8s-version-952358 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:56:05.465570  713774 oci.go:144] the created container "old-k8s-version-952358" has a running status.
	I1101 11:56:05.465595  713774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa...
	I1101 11:56:05.674122  713774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:56:05.704744  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:56:05.732068  713774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:56:05.732088  713774 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-952358 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:56:05.783468  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:56:05.802339  713774 machine.go:94] provisionDockerMachine start ...
	I1101 11:56:05.802442  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:05.827075  713774 main.go:143] libmachine: Using SSH client type: native
	I1101 11:56:05.827414  713774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1101 11:56:05.827424  713774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:56:05.828554  713774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:56:08.977797  713774 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-952358
	
	I1101 11:56:08.977873  713774 ubuntu.go:182] provisioning hostname "old-k8s-version-952358"
	I1101 11:56:08.977967  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:08.998106  713774 main.go:143] libmachine: Using SSH client type: native
	I1101 11:56:08.998411  713774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1101 11:56:08.998426  713774 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-952358 && echo "old-k8s-version-952358" | sudo tee /etc/hostname
	I1101 11:56:09.160197  713774 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-952358
	
	I1101 11:56:09.160280  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:09.178322  713774 main.go:143] libmachine: Using SSH client type: native
	I1101 11:56:09.178671  713774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1101 11:56:09.178699  713774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-952358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-952358/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-952358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:56:09.329890  713774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
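The first SSH dial at 11:56:05 above fails with "handshake failed: EOF" and succeeds roughly three seconds later once sshd inside the container is up. A minimal retry-until-ready sketch of that pattern follows, assuming a plain TCP probe with a one-second backoff; libmachine's real client performs a full SSH handshake rather than a bare connect.

// sshretry.go - minimal sketch, assuming a plain TCP probe and a
// one-second backoff; not libmachine's actual SSH client.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port accepts connections; a real client would now authenticate
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not ready after %s: %v", timeout, err)
		}
		time.Sleep(time.Second) // back off before the next attempt
	}
}

func main() {
	// 127.0.0.1:33775 is the forwarded container SSH port from the log.
	fmt.Println(waitForSSH("127.0.0.1:33775", 10*time.Second))
}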
	I1101 11:56:09.329919  713774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:56:09.329942  713774 ubuntu.go:190] setting up certificates
	I1101 11:56:09.329991  713774 provision.go:84] configureAuth start
	I1101 11:56:09.330069  713774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:56:09.346959  713774 provision.go:143] copyHostCerts
	I1101 11:56:09.347034  713774 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:56:09.347049  713774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:56:09.347134  713774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:56:09.347228  713774 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:56:09.347239  713774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:56:09.347268  713774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:56:09.347335  713774 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:56:09.347345  713774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:56:09.347381  713774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:56:09.347467  713774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-952358 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-952358]
	I1101 11:56:09.767405  713774 provision.go:177] copyRemoteCerts
	I1101 11:56:09.767503  713774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:56:09.767580  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:09.789790  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:09.894587  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 11:56:09.912484  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 11:56:09.931045  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:56:09.949874  713774 provision.go:87] duration metric: took 619.861132ms to configureAuth
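configureAuth above generates a server certificate whose SANs are [127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-952358], signed by the shared minikubeCA. Below is a simplified crypto/x509 sketch of signing such a certificate; the throwaway CA, RSA-2048 keys, and 24-hour validity are assumptions for the example, whereas minikube reuses the ca.pem/ca-key.pem from its certificate store.

// servercert.go - simplified sketch of signing a server cert with the
// SANs listed above; not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch; minikube reuses ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// SANs copied from the "generating server cert" line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-952358"},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-952358"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert DER bytes:", len(der), "err:", err)
}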
	I1101 11:56:09.949899  713774 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:56:09.950111  713774 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:56:09.950236  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:09.970309  713774 main.go:143] libmachine: Using SSH client type: native
	I1101 11:56:09.970686  713774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1101 11:56:09.970711  713774 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:56:10.261405  713774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:56:10.261452  713774 machine.go:97] duration metric: took 4.459094254s to provisionDockerMachine
	I1101 11:56:10.261463  713774 client.go:176] duration metric: took 11.542884414s to LocalClient.Create
	I1101 11:56:10.261481  713774 start.go:167] duration metric: took 11.542952214s to libmachine.API.Create "old-k8s-version-952358"
	I1101 11:56:10.261499  713774 start.go:293] postStartSetup for "old-k8s-version-952358" (driver="docker")
	I1101 11:56:10.261515  713774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:56:10.261594  713774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:56:10.261668  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:10.281012  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:10.386059  713774 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:56:10.389618  713774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:56:10.389658  713774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:56:10.389670  713774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:56:10.389778  713774 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:56:10.389872  713774 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:56:10.389988  713774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:56:10.397318  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:56:10.415703  713774 start.go:296] duration metric: took 154.18294ms for postStartSetup
	I1101 11:56:10.416145  713774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:56:10.433552  713774 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json ...
	I1101 11:56:10.433919  713774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:56:10.433976  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:10.451336  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:10.556022  713774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:56:10.561423  713774 start.go:128] duration metric: took 11.846480937s to createHost
	I1101 11:56:10.561467  713774 start.go:83] releasing machines lock for "old-k8s-version-952358", held for 11.846632775s
	I1101 11:56:10.561561  713774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:56:10.579044  713774 ssh_runner.go:195] Run: cat /version.json
	I1101 11:56:10.579115  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:10.579380  713774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:56:10.579445  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:10.599757  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:10.601033  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:10.794447  713774 ssh_runner.go:195] Run: systemctl --version
	I1101 11:56:10.801045  713774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:56:10.836062  713774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:56:10.840276  713774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:56:10.840374  713774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:56:10.870657  713774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:56:10.870684  713774 start.go:496] detecting cgroup driver to use...
	I1101 11:56:10.870719  713774 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:56:10.870770  713774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:56:10.888295  713774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:56:10.902277  713774 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:56:10.902380  713774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:56:10.919001  713774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:56:10.938851  713774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:56:11.077198  713774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:56:11.215614  713774 docker.go:234] disabling docker service ...
	I1101 11:56:11.215729  713774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:56:11.240703  713774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:56:11.254372  713774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:56:11.369982  713774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:56:11.482342  713774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:56:11.495966  713774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:56:11.511468  713774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 11:56:11.511533  713774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.520452  713774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:56:11.520520  713774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.529566  713774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.538765  713774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.547490  713774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:56:11.555664  713774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.564418  713774 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.578285  713774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:56:11.587932  713774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:56:11.596500  713774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:56:11.610163  713774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:56:11.739156  713774 ssh_runner.go:195] Run: sudo systemctl restart crio
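The block above edits /etc/crio/crio.conf.d/02-crio.conf with a fixed sequence of sed commands (pause image, cgroup manager, conmon cgroup, sysctls) and then reloads systemd and restarts cri-o. A rough sketch of running such a step list and stopping at the first failure follows; runSSH here is a local stand-in for minikube's ssh_runner, and only a subset of the logged edits is included.

// crioconfig.go - rough sketch of the cri-o config edits above; runSSH is
// a local stand-in for remote execution, not minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
)

func runSSH(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func configureCRIO() error {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runSSH(s); err != nil {
			return err // stop at the first failing step
		}
	}
	return nil
}

func main() {
	fmt.Println(configureCRIO())
}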
	I1101 11:56:11.851026  713774 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:56:11.851151  713774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:56:11.855429  713774 start.go:564] Will wait 60s for crictl version
	I1101 11:56:11.855494  713774 ssh_runner.go:195] Run: which crictl
	I1101 11:56:11.859133  713774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:56:11.888470  713774 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:56:11.888566  713774 ssh_runner.go:195] Run: crio --version
	I1101 11:56:11.917614  713774 ssh_runner.go:195] Run: crio --version
	I1101 11:56:11.956086  713774 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 11:56:11.959085  713774 cli_runner.go:164] Run: docker network inspect old-k8s-version-952358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:56:11.975297  713774 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 11:56:11.979224  713774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:56:11.989202  713774 kubeadm.go:884] updating cluster {Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:56:11.989322  713774 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:56:11.989390  713774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:56:12.032717  713774 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:56:12.032742  713774 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:56:12.032803  713774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:56:12.064766  713774 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:56:12.064791  713774 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:56:12.064800  713774 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 11:56:12.064903  713774 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-952358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:56:12.064990  713774 ssh_runner.go:195] Run: crio config
	I1101 11:56:12.119744  713774 cni.go:84] Creating CNI manager for ""
	I1101 11:56:12.119767  713774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:56:12.119783  713774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:56:12.119856  713774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-952358 NodeName:old-k8s-version-952358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:56:12.120008  713774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-952358"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:56:12.120087  713774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 11:56:12.128225  713774 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:56:12.128300  713774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:56:12.136280  713774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 11:56:12.153798  713774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:56:12.168943  713774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
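The kubelet unit, its drop-in, and the kubeadm.yaml shown above are rendered from the cluster config and then copied over ssh. Below is a small text/template sketch of producing just the nodeRegistration stanza from a node struct; the field names and template text are illustrative assumptions, not minikube's bsutil templates.

// kubeadmcfg.go - illustrative text/template rendering of the
// nodeRegistration stanza shown in the generated config above.
package main

import (
	"os"
	"text/template"
)

type node struct {
	Name      string
	NodeIP    string
	CRISocket string
}

const tmpl = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the generated config above.
	n := node{
		Name:      "old-k8s-version-952358",
		NodeIP:    "192.168.85.2",
		CRISocket: "/var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}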
	I1101 11:56:12.182467  713774 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:56:12.186154  713774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:56:12.196443  713774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:56:12.316643  713774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:56:12.333034  713774 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358 for IP: 192.168.85.2
	I1101 11:56:12.333109  713774 certs.go:195] generating shared ca certs ...
	I1101 11:56:12.333140  713774 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:12.333348  713774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:56:12.333434  713774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:56:12.333458  713774 certs.go:257] generating profile certs ...
	I1101 11:56:12.333549  713774 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.key
	I1101 11:56:12.333589  713774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt with IP's: []
	I1101 11:56:12.982875  713774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt ...
	I1101 11:56:12.982912  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: {Name:mk3f780e0e514dc4d598ebcbd603796cdd603c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:12.983116  713774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.key ...
	I1101 11:56:12.983131  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.key: {Name:mka3fabb6963f1a45983990bc9856ddabbcb07a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:12.983233  713774 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key.1ce2c540
	I1101 11:56:12.983252  713774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt.1ce2c540 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 11:56:13.284693  713774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt.1ce2c540 ...
	I1101 11:56:13.284728  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt.1ce2c540: {Name:mkfa02a2600fd150317e3bb65042874f5be34dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:13.284929  713774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key.1ce2c540 ...
	I1101 11:56:13.284945  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key.1ce2c540: {Name:mk933978201486c48e40ff3de9e9318de1fe998d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:13.285044  713774 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt.1ce2c540 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt
	I1101 11:56:13.285125  713774 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key.1ce2c540 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key
	I1101 11:56:13.285237  713774 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key
	I1101 11:56:13.285257  713774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.crt with IP's: []
	I1101 11:56:13.410206  713774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.crt ...
	I1101 11:56:13.410238  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.crt: {Name:mk636c5adf2a2e712c2e7967ffaaf2f59da9dd9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:13.410426  713774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key ...
	I1101 11:56:13.410441  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key: {Name:mke1c5422c5cad6942a164aae9344f67332c9075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:13.410642  713774 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:56:13.410686  713774 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:56:13.410701  713774 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:56:13.410729  713774 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:56:13.410757  713774 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:56:13.410784  713774 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:56:13.410830  713774 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:56:13.411391  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:56:13.429601  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:56:13.448288  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:56:13.466022  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:56:13.484596  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 11:56:13.502885  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:56:13.521534  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:56:13.539349  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:56:13.557788  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:56:13.577559  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:56:13.597445  713774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:56:13.620225  713774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:56:13.633659  713774 ssh_runner.go:195] Run: openssl version
	I1101 11:56:13.640431  713774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:56:13.649104  713774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:56:13.652794  713774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:56:13.652908  713774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:56:13.696503  713774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:56:13.705502  713774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:56:13.713860  713774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:56:13.717887  713774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:56:13.717980  713774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:56:13.761663  713774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:56:13.771025  713774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:56:13.780182  713774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:56:13.784224  713774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:56:13.784291  713774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:56:13.825965  713774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
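Each CA above is installed by asking openssl for the certificate's subject hash and symlinking the PEM to <hash>.0 under /etc/ssl/certs, which is how OpenSSL-based clients locate trust anchors. The compact Go version below reproduces that pair of commands; paths are taken from the log and error handling is minimal.

// cahash.go - compact version of the openssl-hash-plus-symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(certPath, certsDir string) (string, error) {
	// openssl x509 -hash -noout -in <cert> prints the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of: ln -fs <certPath> <link>
	os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}

For minikubeCA.pem this yields the b5213941.0 link created in the log above.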
	I1101 11:56:13.834796  713774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:56:13.838744  713774 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:56:13.838801  713774 kubeadm.go:401] StartCluster: {Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:56:13.838881  713774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:56:13.838946  713774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:56:13.869413  713774 cri.go:89] found id: ""
	I1101 11:56:13.869494  713774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:56:13.877844  713774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:56:13.885978  713774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 11:56:13.886055  713774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:56:13.894393  713774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:56:13.894415  713774 kubeadm.go:158] found existing configuration files:
	
	I1101 11:56:13.894467  713774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:56:13.904618  713774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:56:13.904686  713774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:56:13.913092  713774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:56:13.923275  713774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:56:13.923341  713774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:56:13.932051  713774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:56:13.942372  713774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:56:13.942434  713774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:56:13.951301  713774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:56:13.963591  713774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:56:13.963711  713774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
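The stale-config check above greps each existing kubeconfig for https://control-plane.minikube.internal:8443 and removes files that are missing or point elsewhere, so kubeadm can regenerate them on init. A local sketch of the same loop follows, with file reads standing in for the remote grep/rm.

// staleconfig.go - local sketch of the cleanup loop above: keep a
// kubeconfig only if it already targets the expected control plane,
// otherwise remove it so kubeadm regenerates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected control plane
		}
		os.Remove(f) // missing or stale: drop it
		fmt.Println("removed stale config:", f)
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}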
	I1101 11:56:13.976264  713774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 11:56:14.029991  713774 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 11:56:14.030139  713774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:56:14.074108  713774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 11:56:14.074207  713774 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 11:56:14.074268  713774 kubeadm.go:319] OS: Linux
	I1101 11:56:14.074333  713774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 11:56:14.074399  713774 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 11:56:14.074464  713774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 11:56:14.074531  713774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 11:56:14.074603  713774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 11:56:14.074678  713774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 11:56:14.074761  713774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 11:56:14.074828  713774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 11:56:14.074894  713774 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 11:56:14.159632  713774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:56:14.159781  713774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:56:14.159913  713774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:56:14.314093  713774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:56:14.319462  713774 out.go:252]   - Generating certificates and keys ...
	I1101 11:56:14.319560  713774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:56:14.319673  713774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:56:14.608022  713774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:56:14.844574  713774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:56:15.110009  713774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:56:15.727795  713774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:56:15.871983  713774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:56:15.872360  713774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-952358] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 11:56:16.230247  713774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:56:16.230537  713774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-952358] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 11:56:16.480697  713774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:56:16.707969  713774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:56:17.084107  713774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:56:17.084409  713774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:56:17.435773  713774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:56:17.959767  713774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:56:19.021107  713774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:56:19.796591  713774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:56:19.799006  713774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:56:19.801741  713774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:56:19.805284  713774 out.go:252]   - Booting up control plane ...
	I1101 11:56:19.805399  713774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:56:19.805489  713774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:56:19.805564  713774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:56:19.822927  713774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:56:19.823788  713774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:56:19.823976  713774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:56:19.954904  713774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 11:56:26.957354  713774 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.002565 seconds
	I1101 11:56:26.957487  713774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:56:26.975586  713774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:56:27.512336  713774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:56:27.512557  713774 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-952358 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:56:28.027651  713774 kubeadm.go:319] [bootstrap-token] Using token: ucwtdy.7qw7pqtl2caa2c4f
	I1101 11:56:28.030563  713774 out.go:252]   - Configuring RBAC rules ...
	I1101 11:56:28.030700  713774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:56:28.041679  713774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:56:28.050883  713774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:56:28.055486  713774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:56:28.060182  713774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:56:28.067391  713774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:56:28.081801  713774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:56:28.314545  713774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:56:28.447598  713774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:56:28.449534  713774 kubeadm.go:319] 
	I1101 11:56:28.449611  713774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:56:28.449617  713774 kubeadm.go:319] 
	I1101 11:56:28.449739  713774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:56:28.449746  713774 kubeadm.go:319] 
	I1101 11:56:28.449772  713774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:56:28.449859  713774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:56:28.449914  713774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:56:28.449919  713774 kubeadm.go:319] 
	I1101 11:56:28.449975  713774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:56:28.449979  713774 kubeadm.go:319] 
	I1101 11:56:28.450029  713774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:56:28.450041  713774 kubeadm.go:319] 
	I1101 11:56:28.450095  713774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:56:28.450174  713774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:56:28.450246  713774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:56:28.450250  713774 kubeadm.go:319] 
	I1101 11:56:28.450348  713774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:56:28.450437  713774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:56:28.450443  713774 kubeadm.go:319] 
	I1101 11:56:28.450530  713774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ucwtdy.7qw7pqtl2caa2c4f \
	I1101 11:56:28.450638  713774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 11:56:28.450683  713774 kubeadm.go:319] 	--control-plane 
	I1101 11:56:28.450689  713774 kubeadm.go:319] 
	I1101 11:56:28.450779  713774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:56:28.450783  713774 kubeadm.go:319] 
	I1101 11:56:28.450870  713774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ucwtdy.7qw7pqtl2caa2c4f \
	I1101 11:56:28.450978  713774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 11:56:28.455726  713774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 11:56:28.455859  713774 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:56:28.455876  713774 cni.go:84] Creating CNI manager for ""
	I1101 11:56:28.455889  713774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:56:28.459204  713774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 11:56:28.462198  713774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 11:56:28.468036  713774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 11:56:28.468074  713774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 11:56:28.510594  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 11:56:29.510410  713774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:56:29.510564  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:29.510641  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-952358 minikube.k8s.io/updated_at=2025_11_01T11_56_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=old-k8s-version-952358 minikube.k8s.io/primary=true
	I1101 11:56:29.686441  713774 ops.go:34] apiserver oom_adj: -16
	I1101 11:56:29.713025  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:30.213756  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:30.713212  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:31.214039  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:31.713532  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:32.213308  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:32.713931  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:33.213129  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:33.713112  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:34.213896  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:34.713142  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:35.213919  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:35.714108  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:36.213974  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:36.713160  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:37.213193  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:37.714082  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:38.213886  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:38.714009  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:39.213146  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:39.713967  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:40.213378  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:40.713430  713774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:56:40.815329  713774 kubeadm.go:1114] duration metric: took 11.304810669s to wait for elevateKubeSystemPrivileges
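
The burst of identical "kubectl get sa default" invocations above is a fixed-interval poll: minikube reruns the command roughly every half second until the default service account exists, then logs the total wait (about 11.3s here). A minimal sketch of that wait, assuming kubectl is already on PATH and pointed at the cluster, using a plain half-second sleep instead of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the timeout expires.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                return nil // the default service account exists
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not created within %s", timeout)
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the timestamps above
        }
    }

    func main() {
        start := time.Now()
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Printf("took %s to wait for the default service account\n", time.Since(start))
    }
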
	I1101 11:56:40.815360  713774 kubeadm.go:403] duration metric: took 26.976565402s to StartCluster
	I1101 11:56:40.815377  713774 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:40.815450  713774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:56:40.816413  713774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:56:40.816634  713774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 11:56:40.816661  713774 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:56:40.816716  713774 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-952358"
	I1101 11:56:40.816644  713774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:56:40.816730  713774 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-952358"
	I1101 11:56:40.816914  713774 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:56:40.816752  713774 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:56:40.816947  713774 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-952358"
	I1101 11:56:40.816957  713774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-952358"
	I1101 11:56:40.817262  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:56:40.817435  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:56:40.820516  713774 out.go:179] * Verifying Kubernetes components...
	I1101 11:56:40.823349  713774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:56:40.855161  713774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:56:40.857016  713774 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-952358"
	I1101 11:56:40.857055  713774 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:56:40.857523  713774 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:56:40.861246  713774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:56:40.861268  713774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:56:40.861339  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:40.904322  713774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:56:40.904352  713774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:56:40.904423  713774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:56:40.909632  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:40.931339  713774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:56:41.175341  713774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 11:56:41.175441  713774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:56:41.248104  713774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:56:41.268294  713774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:56:41.827391  713774 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
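
The sed pipeline a few lines up edits the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1) before queries fall through to the normal forwarder; it also prepends a log directive, which is omitted here. A minimal sketch of the same string edit on a Corefile that has already been fetched, leaving the kubectl replace step out:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectMinikubeHost inserts a hosts block ahead of the forward plugin so that
    // host.minikube.internal resolves to the host gateway, mirroring the sed edit in the log.
    func injectMinikubeHost(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock) // insert just before the forward directive
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Print(injectMinikubeHost(corefile, "192.168.85.1"))
    }
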
	I1101 11:56:41.828340  713774 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-952358" to be "Ready" ...
	I1101 11:56:42.145411  713774 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 11:56:42.148407  713774 addons.go:515] duration metric: took 1.331729979s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 11:56:42.331872  713774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-952358" context rescaled to 1 replicas
	W1101 11:56:43.831717  713774 node_ready.go:57] node "old-k8s-version-952358" has "Ready":"False" status (will retry)
	W1101 11:56:45.831991  713774 node_ready.go:57] node "old-k8s-version-952358" has "Ready":"False" status (will retry)
	W1101 11:56:48.331388  713774 node_ready.go:57] node "old-k8s-version-952358" has "Ready":"False" status (will retry)
	W1101 11:56:50.831276  713774 node_ready.go:57] node "old-k8s-version-952358" has "Ready":"False" status (will retry)
	W1101 11:56:52.831924  713774 node_ready.go:57] node "old-k8s-version-952358" has "Ready":"False" status (will retry)
	W1101 11:56:55.332172  713774 node_ready.go:57] node "old-k8s-version-952358" has "Ready":"False" status (will retry)
	I1101 11:56:55.845040  713774 node_ready.go:49] node "old-k8s-version-952358" is "Ready"
	I1101 11:56:55.845065  713774 node_ready.go:38] duration metric: took 14.016695731s for node "old-k8s-version-952358" to be "Ready" ...
	I1101 11:56:55.845079  713774 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:56:55.845148  713774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:56:55.867066  713774 api_server.go:72] duration metric: took 15.050314942s to wait for apiserver process to appear ...
	I1101 11:56:55.867089  713774 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:56:55.867108  713774 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 11:56:55.878257  713774 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 11:56:55.880158  713774 api_server.go:141] control plane version: v1.28.0
	I1101 11:56:55.880181  713774 api_server.go:131] duration metric: took 13.08601ms to wait for apiserver health ...
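
The healthz probe above is nothing more than an HTTPS GET against the apiserver, considered healthy when it answers 200 with the body "ok". A minimal self-contained sketch, skipping certificate verification only to keep it short (a real client should trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify keeps the sketch self-contained; verify the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect "200: ok" on a healthy control plane
    }
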
	I1101 11:56:55.880190  713774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:56:55.888356  713774 system_pods.go:59] 8 kube-system pods found
	I1101 11:56:55.892104  713774 system_pods.go:61] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:56:55.893539  713774 system_pods.go:61] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running
	I1101 11:56:55.893611  713774 system_pods.go:61] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:56:55.893636  713774 system_pods.go:61] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running
	I1101 11:56:55.893658  713774 system_pods.go:61] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running
	I1101 11:56:55.893679  713774 system_pods.go:61] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:56:55.893757  713774 system_pods.go:61] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running
	I1101 11:56:55.893790  713774 system_pods.go:61] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:56:55.893813  713774 system_pods.go:74] duration metric: took 13.615765ms to wait for pod list to return data ...
	I1101 11:56:55.893839  713774 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:56:55.897383  713774 default_sa.go:45] found service account: "default"
	I1101 11:56:55.897444  713774 default_sa.go:55] duration metric: took 3.581839ms for default service account to be created ...
	I1101 11:56:55.897469  713774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:56:55.902873  713774 system_pods.go:86] 8 kube-system pods found
	I1101 11:56:55.902954  713774 system_pods.go:89] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:56:55.902981  713774 system_pods.go:89] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running
	I1101 11:56:55.903003  713774 system_pods.go:89] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:56:55.903053  713774 system_pods.go:89] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running
	I1101 11:56:55.903073  713774 system_pods.go:89] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running
	I1101 11:56:55.903094  713774 system_pods.go:89] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:56:55.903126  713774 system_pods.go:89] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running
	I1101 11:56:55.903150  713774 system_pods.go:89] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:56:55.903195  713774 retry.go:31] will retry after 231.430825ms: missing components: kube-dns
	I1101 11:56:56.139483  713774 system_pods.go:86] 8 kube-system pods found
	I1101 11:56:56.139519  713774 system_pods.go:89] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:56:56.139526  713774 system_pods.go:89] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running
	I1101 11:56:56.139533  713774 system_pods.go:89] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:56:56.139570  713774 system_pods.go:89] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running
	I1101 11:56:56.139583  713774 system_pods.go:89] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running
	I1101 11:56:56.139587  713774 system_pods.go:89] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:56:56.139591  713774 system_pods.go:89] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running
	I1101 11:56:56.139597  713774 system_pods.go:89] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:56:56.139613  713774 retry.go:31] will retry after 263.992281ms: missing components: kube-dns
	I1101 11:56:56.407684  713774 system_pods.go:86] 8 kube-system pods found
	I1101 11:56:56.407717  713774 system_pods.go:89] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:56:56.407723  713774 system_pods.go:89] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running
	I1101 11:56:56.407729  713774 system_pods.go:89] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:56:56.407734  713774 system_pods.go:89] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running
	I1101 11:56:56.407739  713774 system_pods.go:89] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running
	I1101 11:56:56.407743  713774 system_pods.go:89] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:56:56.407748  713774 system_pods.go:89] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running
	I1101 11:56:56.407753  713774 system_pods.go:89] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:56:56.407768  713774 retry.go:31] will retry after 425.427367ms: missing components: kube-dns
	I1101 11:56:56.836905  713774 system_pods.go:86] 8 kube-system pods found
	I1101 11:56:56.836934  713774 system_pods.go:89] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Running
	I1101 11:56:56.836941  713774 system_pods.go:89] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running
	I1101 11:56:56.836946  713774 system_pods.go:89] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:56:56.836951  713774 system_pods.go:89] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running
	I1101 11:56:56.836956  713774 system_pods.go:89] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running
	I1101 11:56:56.836960  713774 system_pods.go:89] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:56:56.836965  713774 system_pods.go:89] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running
	I1101 11:56:56.836969  713774 system_pods.go:89] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Running
	I1101 11:56:56.836977  713774 system_pods.go:126] duration metric: took 939.480228ms to wait for k8s-apps to be running ...
	I1101 11:56:56.836990  713774 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:56:56.837046  713774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:56:56.850703  713774 system_svc.go:56] duration metric: took 13.703118ms WaitForService to wait for kubelet
	I1101 11:56:56.850731  713774 kubeadm.go:587] duration metric: took 16.033985466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:56:56.850752  713774 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:56:56.853451  713774 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:56:56.853484  713774 node_conditions.go:123] node cpu capacity is 2
	I1101 11:56:56.853498  713774 node_conditions.go:105] duration metric: took 2.741123ms to run NodePressure ...
	I1101 11:56:56.853510  713774 start.go:242] waiting for startup goroutines ...
	I1101 11:56:56.853518  713774 start.go:247] waiting for cluster config update ...
	I1101 11:56:56.853528  713774 start.go:256] writing updated cluster config ...
	I1101 11:56:56.853914  713774 ssh_runner.go:195] Run: rm -f paused
	I1101 11:56:56.857757  713774 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:56:56.862332  713774 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pmb27" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:56.867637  713774 pod_ready.go:94] pod "coredns-5dd5756b68-pmb27" is "Ready"
	I1101 11:56:56.867666  713774 pod_ready.go:86] duration metric: took 5.306243ms for pod "coredns-5dd5756b68-pmb27" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:56.870967  713774 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:56.876200  713774 pod_ready.go:94] pod "etcd-old-k8s-version-952358" is "Ready"
	I1101 11:56:56.876231  713774 pod_ready.go:86] duration metric: took 5.239526ms for pod "etcd-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:56.879650  713774 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:56.884775  713774 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-952358" is "Ready"
	I1101 11:56:56.884799  713774 pod_ready.go:86] duration metric: took 5.122766ms for pod "kube-apiserver-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:56.888144  713774 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:57.262858  713774 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-952358" is "Ready"
	I1101 11:56:57.262886  713774 pod_ready.go:86] duration metric: took 374.713798ms for pod "kube-controller-manager-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:57.462562  713774 pod_ready.go:83] waiting for pod "kube-proxy-kmxd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:57.861603  713774 pod_ready.go:94] pod "kube-proxy-kmxd8" is "Ready"
	I1101 11:56:57.861629  713774 pod_ready.go:86] duration metric: took 399.040848ms for pod "kube-proxy-kmxd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:58.062562  713774 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:58.462440  713774 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-952358" is "Ready"
	I1101 11:56:58.462467  713774 pod_ready.go:86] duration metric: took 399.880531ms for pod "kube-scheduler-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:56:58.462480  713774 pod_ready.go:40] duration metric: took 1.604641075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:56:58.514199  713774 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 11:56:58.517165  713774 out.go:203] 
	W1101 11:56:58.520120  713774 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 11:56:58.522962  713774 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 11:56:58.526706  713774 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-952358" cluster and "default" namespace by default
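
The skew warning near the end is simple arithmetic over the minor versions: kubectl 1.33.x against a 1.28.x cluster gives |33 - 28| = 5, well outside the one-minor-version skew kubectl officially supports, hence the hint to use the bundled kubectl instead. A minimal sketch of that computation, assuming well-formed "major.minor.patch" strings:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of two
    // "major.minor.patch" version strings, e.g. "1.33.2" vs "1.28.0" -> 5.
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            parts := strings.Split(v, ".")
            m, _ := strconv.Atoi(parts[1]) // well-formed input assumed for this sketch
            return m
        }
        d := minor(a) - minor(b)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        skew := minorSkew("1.33.2", "1.28.0")
        fmt.Printf("minor skew: %d (supported range is +/-1)\n", skew) // prints 5, matching the log
    }
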
	
	
	==> CRI-O <==
	Nov 01 11:56:55 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:55.820994735Z" level=info msg="Created container 761afbcc33b20a9384353f1f058bbc3b583d97ed477466a59342d1347927f200: kube-system/coredns-5dd5756b68-pmb27/coredns" id=f85f2a14-b8c4-4132-b39b-0b6e24e6047a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:56:55 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:55.822014473Z" level=info msg="Starting container: 761afbcc33b20a9384353f1f058bbc3b583d97ed477466a59342d1347927f200" id=a9772214-3e66-4899-be2b-a241063b25cd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:56:55 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:55.825549296Z" level=info msg="Started container" PID=1915 containerID=761afbcc33b20a9384353f1f058bbc3b583d97ed477466a59342d1347927f200 description=kube-system/coredns-5dd5756b68-pmb27/coredns id=a9772214-3e66-4899-be2b-a241063b25cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd5ce2a063ee5dceb1d4364e2de2292b53a9490198468cfa88f37740b3312899
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.047956465Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c19f94f5-ab33-4dcc-9ce5-b082214a7ec0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.048030927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.054187856Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6 UID:a2cae1c5-c388-493d-93c1-2ea919b16ea1 NetNS:/var/run/netns/570fc543-af8a-4e56-80f8-5b0db77f026d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e20}] Aliases:map[]}"
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.054225337Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.064952195Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6 UID:a2cae1c5-c388-493d-93c1-2ea919b16ea1 NetNS:/var/run/netns/570fc543-af8a-4e56-80f8-5b0db77f026d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e20}] Aliases:map[]}"
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.065109326Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.070078153Z" level=info msg="Ran pod sandbox 6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6 with infra container: default/busybox/POD" id=c19f94f5-ab33-4dcc-9ce5-b082214a7ec0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.071150224Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d02ff83-1110-4bee-aa45-020106781da0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.071285955Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7d02ff83-1110-4bee-aa45-020106781da0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.071387175Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7d02ff83-1110-4bee-aa45-020106781da0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.071951408Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7eb3e784-44de-4373-995c-45a6dfa2e0ba name=/runtime.v1.ImageService/PullImage
	Nov 01 11:56:59 old-k8s-version-952358 crio[838]: time="2025-11-01T11:56:59.074816511Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.15440032Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7eb3e784-44de-4373-995c-45a6dfa2e0ba name=/runtime.v1.ImageService/PullImage
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.157461258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=40924fbd-d7ac-4098-b97a-302a97524039 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.160211537Z" level=info msg="Creating container: default/busybox/busybox" id=91d236a1-fec5-43a4-9894-bf100c62b31a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.160339999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.165425906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.166268182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.184166956Z" level=info msg="Created container d6cd70f2da6213af60c765f8aa491b3414165a899bed7ffaf595f337a1002a37: default/busybox/busybox" id=91d236a1-fec5-43a4-9894-bf100c62b31a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.185514345Z" level=info msg="Starting container: d6cd70f2da6213af60c765f8aa491b3414165a899bed7ffaf595f337a1002a37" id=ff3a5b5f-8f99-4544-a695-141d301eebc6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:57:01 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:01.187856595Z" level=info msg="Started container" PID=1970 containerID=d6cd70f2da6213af60c765f8aa491b3414165a899bed7ffaf595f337a1002a37 description=default/busybox/busybox id=ff3a5b5f-8f99-4544-a695-141d301eebc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6
	Nov 01 11:57:07 old-k8s-version-952358 crio[838]: time="2025-11-01T11:57:07.976548074Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
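
The CRI-O entries above trace one pass through the Container Runtime Interface for the busybox pod: RunPodSandbox, an ImageStatus check that misses, PullImage, CreateContainer, StartContainer. The interface below is a hypothetical, heavily trimmed stand-in for those RPCs (the real definitions live in k8s.io/cri-api and carry much richer request and response types); it exists only to make the ordering explicit:

    package main

    // runtime is a hypothetical, simplified stand-in for the CRI RPCs seen in the log above;
    // the real interface is defined by k8s.io/cri-api and is considerably larger.
    type runtime interface {
        RunPodSandbox(podConfig string) (sandboxID string, err error)
        ImageStatus(image string) (present bool, err error)
        PullImage(image string) error
        CreateContainer(sandboxID, image, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startPod replays the sequence from the log: sandbox first, pull only on a miss, then create and start.
    func startPod(rt runtime, podConfig, image, name string) (string, error) {
        sandboxID, err := rt.RunPodSandbox(podConfig)
        if err != nil {
            return "", err
        }
        present, err := rt.ImageStatus(image)
        if err != nil {
            return "", err
        }
        if !present {
            if err := rt.PullImage(image); err != nil {
                return "", err
            }
        }
        containerID, err := rt.CreateContainer(sandboxID, image, name)
        if err != nil {
            return "", err
        }
        return containerID, rt.StartContainer(containerID)
    }

    func main() {} // no concrete runtime here; the sketch only documents the call order
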
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d6cd70f2da621       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   6184088f974be       busybox                                          default
	761afbcc33b20       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   dd5ce2a063ee5       coredns-5dd5756b68-pmb27                         kube-system
	0850e5c89fb39       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   e6bb586f724cf       storage-provisioner                              kube-system
	1b4fe79252f8e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   db81024170d2d       kindnet-sn7mz                                    kube-system
	2060322b269f3       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   25271f54f01ca       kube-proxy-kmxd8                                 kube-system
	7ee1c6ab92026       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   79ccfec8a557e       etcd-old-k8s-version-952358                      kube-system
	ae62eb6630448       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   be9c21fdbc7f8       kube-apiserver-old-k8s-version-952358            kube-system
	cede1318d4e66       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   925336c3d45d9       kube-controller-manager-old-k8s-version-952358   kube-system
	45dc3c84906a0       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   3f0a4a622d259       kube-scheduler-old-k8s-version-952358            kube-system
	
	
	==> coredns [761afbcc33b20a9384353f1f058bbc3b583d97ed477466a59342d1347927f200] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46295 - 108 "HINFO IN 8959353347690960909.6227995469318556496. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030745684s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-952358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-952358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=old-k8s-version-952358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_56_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-952358
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:57:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:56:59 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:56:59 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:56:59 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:56:59 +0000   Sat, 01 Nov 2025 11:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-952358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dbeefb29-03d1-48b6-93d2-8db0a71a3a9e
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-pmb27                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-952358                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-sn7mz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-952358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-952358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-kmxd8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-952358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-952358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-952358 event: Registered Node old-k8s-version-952358 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-952358 status is now: NodeReady
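
The 850m / 42% CPU request figure in the Allocated resources block above is just the sum of the per-pod requests in the table: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2-CPU (2000m) allocatable capacity is 42.5%, which the output truncates to 42%. The memory line works the same way: 70Mi + 100Mi + 50Mi = 220Mi against roughly 8022296Ki allocatable, i.e. about 2.8%, shown as 2%.
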
	
	
	==> dmesg <==
	[ +46.322577] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:34] overlayfs: idmapped layers are currently not supported
	[ +35.784283] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7ee1c6ab92026cfd6d68684c928e5923c7b2dcebeb4fa8164a6cc0e3f9bb4a47] <==
	{"level":"info","ts":"2025-11-01T11:56:21.69453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T11:56:21.694904Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T11:56:21.695212Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T11:56:21.69529Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T11:56:21.695387Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T11:56:21.69828Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T11:56:21.69836Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T11:56:22.644868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-01T11:56:22.644992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-01T11:56:22.645031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-01T11:56:22.645083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T11:56:22.645116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T11:56:22.645166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-01T11:56:22.645207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T11:56:22.647116Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:56:22.648636Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-952358 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T11:56:22.648712Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:56:22.65003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T11:56:22.650148Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:56:22.650564Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:56:22.651885Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:56:22.651959Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:56:22.651519Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T11:56:22.661824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T11:56:22.661927Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:57:09 up  3:39,  0 user,  load average: 3.31, 3.42, 2.65
	Linux old-k8s-version-952358 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b4fe79252f8e20898dc231c9ab81b7f56389f1483b3a010443d643b3a6c8143] <==
	I1101 11:56:44.726092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:56:44.726479       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 11:56:44.726636       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:56:44.726676       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:56:44.726716       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:56:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:56:45.021613       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:56:45.021739       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:56:45.021783       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:56:45.024031       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 11:56:45.228271       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:56:45.317805       1 metrics.go:72] Registering metrics
	I1101 11:56:45.318032       1 controller.go:711] "Syncing nftables rules"
	I1101 11:56:54.936048       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:56:54.936099       1 main.go:301] handling current node
	I1101 11:57:04.929243       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:57:04.929276       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ae62eb66304484389958541b8a73eaaa1627dfbcf05761814176b410fd2e8156] <==
	I1101 11:56:25.489631       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 11:56:25.489809       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 11:56:25.491660       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 11:56:25.492397       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 11:56:25.492481       1 aggregator.go:166] initial CRD sync complete...
	I1101 11:56:25.492490       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 11:56:25.492496       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:56:25.492502       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:56:25.493747       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 11:56:25.690958       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:56:26.194966       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 11:56:26.201005       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 11:56:26.201099       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:56:26.832066       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:56:26.876584       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:56:27.029952       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 11:56:27.038813       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 11:56:27.040014       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 11:56:27.045005       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:56:27.382052       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 11:56:28.293669       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 11:56:28.312936       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 11:56:28.327133       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 11:56:41.106046       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 11:56:41.172357       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [cede1318d4e66f59068291c13c3e4a535317331f8e8a7e28d5b16fb6b89a2de3] <==
	I1101 11:56:40.419393       1 event.go:307] "Event occurred" object="old-k8s-version-952358" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-952358 event: Registered Node old-k8s-version-952358 in Controller"
	I1101 11:56:40.427797       1 shared_informer.go:318] Caches are synced for daemon sets
	I1101 11:56:40.806953       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 11:56:40.821607       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 11:56:40.821639       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 11:56:41.125202       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 11:56:41.203165       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-sn7mz"
	I1101 11:56:41.221240       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kmxd8"
	I1101 11:56:41.346935       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6wnhz"
	I1101 11:56:41.375149       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pmb27"
	I1101 11:56:41.406777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="282.36106ms"
	I1101 11:56:41.439760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.935989ms"
	I1101 11:56:41.448662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="242.686µs"
	I1101 11:56:41.449033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="202.432µs"
	I1101 11:56:41.468040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.753µs"
	I1101 11:56:41.882861       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 11:56:41.914995       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6wnhz"
	I1101 11:56:41.948555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.268275ms"
	I1101 11:56:42.031748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.142708ms"
	I1101 11:56:42.031868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.786µs"
	I1101 11:56:55.421048       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1101 11:56:55.431134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.072µs"
	I1101 11:56:55.460415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.096µs"
	I1101 11:56:56.644335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.526185ms"
	I1101 11:56:56.645663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.663µs"
	
	
	==> kube-proxy [2060322b269f3db60774ee14de2bfca179fc90b3d6b47989f028fd968143411e] <==
	I1101 11:56:42.308073       1 server_others.go:69] "Using iptables proxy"
	I1101 11:56:42.325484       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 11:56:42.354113       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:56:42.355826       1 server_others.go:152] "Using iptables Proxier"
	I1101 11:56:42.355950       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 11:56:42.355969       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 11:56:42.355992       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 11:56:42.356210       1 server.go:846] "Version info" version="v1.28.0"
	I1101 11:56:42.356225       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:56:42.357132       1 config.go:188] "Starting service config controller"
	I1101 11:56:42.357232       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 11:56:42.357293       1 config.go:97] "Starting endpoint slice config controller"
	I1101 11:56:42.357320       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 11:56:42.358126       1 config.go:315] "Starting node config controller"
	I1101 11:56:42.358187       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 11:56:42.458160       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 11:56:42.458163       1 shared_informer.go:318] Caches are synced for service config
	I1101 11:56:42.458402       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [45dc3c84906a004ee757b09966cfa1d0604890b9b3d3558936b575a250ecf97f] <==
	W1101 11:56:25.461902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 11:56:25.461934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 11:56:25.461983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 11:56:25.462000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 11:56:25.461985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 11:56:25.462020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 11:56:25.462102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 11:56:25.462158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 11:56:25.464997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 11:56:25.465033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 11:56:25.465209       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 11:56:25.465228       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:56:26.275769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 11:56:26.275805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 11:56:26.299607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 11:56:26.299642       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1101 11:56:26.386022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 11:56:26.386061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 11:56:26.445875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 11:56:26.446072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 11:56:26.450280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 11:56:26.450398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 11:56:26.730859       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 11:56:26.730974       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1101 11:56:29.447876       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: E1101 11:56:41.270308    1352 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-952358" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-952358' and this object
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.273920    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552a2264-bdd9-4b5f-b48c-369e6eff47aa-xtables-lock\") pod \"kindnet-sn7mz\" (UID: \"552a2264-bdd9-4b5f-b48c-369e6eff47aa\") " pod="kube-system/kindnet-sn7mz"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.273987    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/552a2264-bdd9-4b5f-b48c-369e6eff47aa-cni-cfg\") pod \"kindnet-sn7mz\" (UID: \"552a2264-bdd9-4b5f-b48c-369e6eff47aa\") " pod="kube-system/kindnet-sn7mz"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.274051    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hhw\" (UniqueName: \"kubernetes.io/projected/552a2264-bdd9-4b5f-b48c-369e6eff47aa-kube-api-access-c6hhw\") pod \"kindnet-sn7mz\" (UID: \"552a2264-bdd9-4b5f-b48c-369e6eff47aa\") " pod="kube-system/kindnet-sn7mz"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.274111    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5424cb6f-ae01-4a4c-a66d-4c079aef46c6-xtables-lock\") pod \"kube-proxy-kmxd8\" (UID: \"5424cb6f-ae01-4a4c-a66d-4c079aef46c6\") " pod="kube-system/kube-proxy-kmxd8"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.274144    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrbk\" (UniqueName: \"kubernetes.io/projected/5424cb6f-ae01-4a4c-a66d-4c079aef46c6-kube-api-access-zfrbk\") pod \"kube-proxy-kmxd8\" (UID: \"5424cb6f-ae01-4a4c-a66d-4c079aef46c6\") " pod="kube-system/kube-proxy-kmxd8"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.274184    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5424cb6f-ae01-4a4c-a66d-4c079aef46c6-kube-proxy\") pod \"kube-proxy-kmxd8\" (UID: \"5424cb6f-ae01-4a4c-a66d-4c079aef46c6\") " pod="kube-system/kube-proxy-kmxd8"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.274216    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5424cb6f-ae01-4a4c-a66d-4c079aef46c6-lib-modules\") pod \"kube-proxy-kmxd8\" (UID: \"5424cb6f-ae01-4a4c-a66d-4c079aef46c6\") " pod="kube-system/kube-proxy-kmxd8"
	Nov 01 11:56:41 old-k8s-version-952358 kubelet[1352]: I1101 11:56:41.274248    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552a2264-bdd9-4b5f-b48c-369e6eff47aa-lib-modules\") pod \"kindnet-sn7mz\" (UID: \"552a2264-bdd9-4b5f-b48c-369e6eff47aa\") " pod="kube-system/kindnet-sn7mz"
	Nov 01 11:56:42 old-k8s-version-952358 kubelet[1352]: W1101 11:56:42.201418    1352 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-db81024170d2d44018b9ea68fb0a1f2b7c1d16deadced9e32562783f4b107c84 WatchSource:0}: Error finding container db81024170d2d44018b9ea68fb0a1f2b7c1d16deadced9e32562783f4b107c84: Status 404 returned error can't find the container with id db81024170d2d44018b9ea68fb0a1f2b7c1d16deadced9e32562783f4b107c84
	Nov 01 11:56:45 old-k8s-version-952358 kubelet[1352]: I1101 11:56:45.583578    1352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kmxd8" podStartSLOduration=4.583535399 podCreationTimestamp="2025-11-01 11:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:56:42.577050404 +0000 UTC m=+14.317534790" watchObservedRunningTime="2025-11-01 11:56:45.583535399 +0000 UTC m=+17.324019794"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.391390    1352 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.422733    1352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-sn7mz" podStartSLOduration=12.001424466 podCreationTimestamp="2025-11-01 11:56:41 +0000 UTC" firstStartedPulling="2025-11-01 11:56:42.216990891 +0000 UTC m=+13.957475286" lastFinishedPulling="2025-11-01 11:56:44.638251373 +0000 UTC m=+16.378735760" observedRunningTime="2025-11-01 11:56:45.584825098 +0000 UTC m=+17.325309485" watchObservedRunningTime="2025-11-01 11:56:55.42268494 +0000 UTC m=+27.163169335"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.422917    1352 topology_manager.go:215] "Topology Admit Handler" podUID="caedd5ef-fa47-4b4e-b104-945d4b554f7f" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.429762    1352 topology_manager.go:215] "Topology Admit Handler" podUID="5ed95095-99da-4744-9e27-3c17af6a824a" podNamespace="kube-system" podName="coredns-5dd5756b68-pmb27"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.582754    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ed95095-99da-4744-9e27-3c17af6a824a-config-volume\") pod \"coredns-5dd5756b68-pmb27\" (UID: \"5ed95095-99da-4744-9e27-3c17af6a824a\") " pod="kube-system/coredns-5dd5756b68-pmb27"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.582813    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vngh\" (UniqueName: \"kubernetes.io/projected/caedd5ef-fa47-4b4e-b104-945d4b554f7f-kube-api-access-6vngh\") pod \"storage-provisioner\" (UID: \"caedd5ef-fa47-4b4e-b104-945d4b554f7f\") " pod="kube-system/storage-provisioner"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.582839    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/caedd5ef-fa47-4b4e-b104-945d4b554f7f-tmp\") pod \"storage-provisioner\" (UID: \"caedd5ef-fa47-4b4e-b104-945d4b554f7f\") " pod="kube-system/storage-provisioner"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: I1101 11:56:55.582866    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lb6\" (UniqueName: \"kubernetes.io/projected/5ed95095-99da-4744-9e27-3c17af6a824a-kube-api-access-24lb6\") pod \"coredns-5dd5756b68-pmb27\" (UID: \"5ed95095-99da-4744-9e27-3c17af6a824a\") " pod="kube-system/coredns-5dd5756b68-pmb27"
	Nov 01 11:56:55 old-k8s-version-952358 kubelet[1352]: W1101 11:56:55.767359    1352 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-dd5ce2a063ee5dceb1d4364e2de2292b53a9490198468cfa88f37740b3312899 WatchSource:0}: Error finding container dd5ce2a063ee5dceb1d4364e2de2292b53a9490198468cfa88f37740b3312899: Status 404 returned error can't find the container with id dd5ce2a063ee5dceb1d4364e2de2292b53a9490198468cfa88f37740b3312899
	Nov 01 11:56:56 old-k8s-version-952358 kubelet[1352]: I1101 11:56:56.628046    1352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.627999772 podCreationTimestamp="2025-11-01 11:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:56:56.614459848 +0000 UTC m=+28.354944243" watchObservedRunningTime="2025-11-01 11:56:56.627999772 +0000 UTC m=+28.368484183"
	Nov 01 11:56:58 old-k8s-version-952358 kubelet[1352]: I1101 11:56:58.744632    1352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pmb27" podStartSLOduration=17.744570813 podCreationTimestamp="2025-11-01 11:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:56:56.628463441 +0000 UTC m=+28.368947828" watchObservedRunningTime="2025-11-01 11:56:58.744570813 +0000 UTC m=+30.485055208"
	Nov 01 11:56:58 old-k8s-version-952358 kubelet[1352]: I1101 11:56:58.745613    1352 topology_manager.go:215] "Topology Admit Handler" podUID="a2cae1c5-c388-493d-93c1-2ea919b16ea1" podNamespace="default" podName="busybox"
	Nov 01 11:56:58 old-k8s-version-952358 kubelet[1352]: I1101 11:56:58.904185    1352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrb2\" (UniqueName: \"kubernetes.io/projected/a2cae1c5-c388-493d-93c1-2ea919b16ea1-kube-api-access-gtrb2\") pod \"busybox\" (UID: \"a2cae1c5-c388-493d-93c1-2ea919b16ea1\") " pod="default/busybox"
	Nov 01 11:56:59 old-k8s-version-952358 kubelet[1352]: W1101 11:56:59.067186    1352 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6 WatchSource:0}: Error finding container 6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6: Status 404 returned error can't find the container with id 6184088f974be919024dd61cc8b92c2463bafea2166e11d9c5f7d0f64344bfd6
	
	
	==> storage-provisioner [0850e5c89fb394e11277e68b6368642cf4e932e5458ac5f0332af8e932ac4de0] <==
	I1101 11:56:55.793864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 11:56:55.815711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 11:56:55.815840       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 11:56:55.834117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 11:56:55.834354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-952358_9343f2ff-317e-4b90-800e-0c61037e3a30!
	I1101 11:56:55.835156       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5ce5b6b-8d31-4770-8329-c46e139ecfe3", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-952358_9343f2ff-317e-4b90-800e-0c61037e3a30 became leader
	I1101 11:56:55.935992       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-952358_9343f2ff-317e-4b90-800e-0c61037e3a30!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-952358 -n old-k8s-version-952358
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-952358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-952358 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-952358 --alsologtostderr -v=1: exit status 80 (1.760685678s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-952358 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:58:28.295096  719514 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:58:28.295202  719514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:58:28.295208  719514 out.go:374] Setting ErrFile to fd 2...
	I1101 11:58:28.295213  719514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:58:28.295717  719514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:58:28.296025  719514 out.go:368] Setting JSON to false
	I1101 11:58:28.296080  719514 mustload.go:66] Loading cluster: old-k8s-version-952358
	I1101 11:58:28.296866  719514 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:58:28.300009  719514 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:58:28.322287  719514 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:58:28.322694  719514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:58:28.391367  719514 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 11:58:28.381584867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:58:28.392063  719514 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-952358 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 11:58:28.397970  719514 out.go:179] * Pausing node old-k8s-version-952358 ... 
	I1101 11:58:28.405202  719514 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:58:28.405715  719514 ssh_runner.go:195] Run: systemctl --version
	I1101 11:58:28.405765  719514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:58:28.424461  719514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:58:28.528747  719514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:58:28.541925  719514 pause.go:52] kubelet running: true
	I1101 11:58:28.541993  719514 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:58:28.745245  719514 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:58:28.745343  719514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:58:28.821962  719514 cri.go:89] found id: "095e20aeaa7c4e797b3b245447eb78281f018d603dfb8a3b04200dd8864113ff"
	I1101 11:58:28.821987  719514 cri.go:89] found id: "1673b27ac77a27952be16353ca8f921102c799a4d215bc4be1208247b488b327"
	I1101 11:58:28.821991  719514 cri.go:89] found id: "f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc"
	I1101 11:58:28.821995  719514 cri.go:89] found id: "d4b9fdc04889ed0379546b17184bd798a9da3238ce38b3beb28e6b6a07f5d656"
	I1101 11:58:28.821999  719514 cri.go:89] found id: "700b0703d579a0b705abcfec7e4dc2f3e95f8991206525d04be84898e72ba25d"
	I1101 11:58:28.822003  719514 cri.go:89] found id: "9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63"
	I1101 11:58:28.822007  719514 cri.go:89] found id: "aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be"
	I1101 11:58:28.822011  719514 cri.go:89] found id: "8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339"
	I1101 11:58:28.822014  719514 cri.go:89] found id: "bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86"
	I1101 11:58:28.822021  719514 cri.go:89] found id: "badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	I1101 11:58:28.822025  719514 cri.go:89] found id: "64a312d9f4c53e1433cc7d19282cc174c8a3ba911a400b3129ce54fe724fd5ba"
	I1101 11:58:28.822028  719514 cri.go:89] found id: ""
	I1101 11:58:28.822077  719514 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:58:28.842669  719514 retry.go:31] will retry after 360.301959ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:58:28Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:58:29.203221  719514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:58:29.217893  719514 pause.go:52] kubelet running: false
	I1101 11:58:29.217975  719514 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:58:29.390766  719514 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:58:29.390861  719514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:58:29.458719  719514 cri.go:89] found id: "095e20aeaa7c4e797b3b245447eb78281f018d603dfb8a3b04200dd8864113ff"
	I1101 11:58:29.458741  719514 cri.go:89] found id: "1673b27ac77a27952be16353ca8f921102c799a4d215bc4be1208247b488b327"
	I1101 11:58:29.458747  719514 cri.go:89] found id: "f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc"
	I1101 11:58:29.458751  719514 cri.go:89] found id: "d4b9fdc04889ed0379546b17184bd798a9da3238ce38b3beb28e6b6a07f5d656"
	I1101 11:58:29.458754  719514 cri.go:89] found id: "700b0703d579a0b705abcfec7e4dc2f3e95f8991206525d04be84898e72ba25d"
	I1101 11:58:29.458758  719514 cri.go:89] found id: "9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63"
	I1101 11:58:29.458761  719514 cri.go:89] found id: "aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be"
	I1101 11:58:29.458777  719514 cri.go:89] found id: "8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339"
	I1101 11:58:29.458787  719514 cri.go:89] found id: "bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86"
	I1101 11:58:29.458794  719514 cri.go:89] found id: "badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	I1101 11:58:29.458797  719514 cri.go:89] found id: "64a312d9f4c53e1433cc7d19282cc174c8a3ba911a400b3129ce54fe724fd5ba"
	I1101 11:58:29.458800  719514 cri.go:89] found id: ""
	I1101 11:58:29.458848  719514 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:58:29.469771  719514 retry.go:31] will retry after 227.831982ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:58:29Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:58:29.698295  719514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:58:29.711450  719514 pause.go:52] kubelet running: false
	I1101 11:58:29.711559  719514 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 11:58:29.884026  719514 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 11:58:29.884152  719514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 11:58:29.955563  719514 cri.go:89] found id: "095e20aeaa7c4e797b3b245447eb78281f018d603dfb8a3b04200dd8864113ff"
	I1101 11:58:29.955590  719514 cri.go:89] found id: "1673b27ac77a27952be16353ca8f921102c799a4d215bc4be1208247b488b327"
	I1101 11:58:29.955596  719514 cri.go:89] found id: "f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc"
	I1101 11:58:29.955601  719514 cri.go:89] found id: "d4b9fdc04889ed0379546b17184bd798a9da3238ce38b3beb28e6b6a07f5d656"
	I1101 11:58:29.955604  719514 cri.go:89] found id: "700b0703d579a0b705abcfec7e4dc2f3e95f8991206525d04be84898e72ba25d"
	I1101 11:58:29.955608  719514 cri.go:89] found id: "9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63"
	I1101 11:58:29.955612  719514 cri.go:89] found id: "aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be"
	I1101 11:58:29.955646  719514 cri.go:89] found id: "8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339"
	I1101 11:58:29.955658  719514 cri.go:89] found id: "bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86"
	I1101 11:58:29.955666  719514 cri.go:89] found id: "badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	I1101 11:58:29.955670  719514 cri.go:89] found id: "64a312d9f4c53e1433cc7d19282cc174c8a3ba911a400b3129ce54fe724fd5ba"
	I1101 11:58:29.955673  719514 cri.go:89] found id: ""
	I1101 11:58:29.955738  719514 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 11:58:29.972638  719514 out.go:203] 
	W1101 11:58:29.976053  719514 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:58:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:58:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 11:58:29.976074  719514 out.go:285] * 
	* 
	W1101 11:58:29.984010  719514 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 11:58:29.987271  719514 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-952358 --alsologtostderr -v=1 failed: exit status 80
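The exit status 80 above is minikube's GUEST_PAUSE error: the pause path repeatedly runs `sudo runc list -f json` on the node, and every attempt in the stderr log fails with `open /run/runc: no such file or directory`, so no container list is ever obtained. Below is a minimal sketch, not part of the test suite, for re-running that same check from the CI host; the container name and the runc command are taken from the log above, while the file name and the use of `docker exec` (instead of minikube's SSH runner) are illustrative assumptions.

	// repro_runc_list.go - minimal sketch; assumes docker is on PATH and the
	// old-k8s-version-952358 node container from this run is still up.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the pause path runs inside the node over SSH:
		//   sudo runc list -f json
		cmd := exec.Command("docker", "exec", "old-k8s-version-952358",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\n", out)
		if err != nil {
			// On this run the command exits non-zero with
			// "open /run/runc: no such file or directory", which pause.go
			// surfaces as the GUEST_PAUSE failure recorded above.
			fmt.Printf("runc list failed: %v\n", err)
		}
	}

A follow-up `docker exec old-k8s-version-952358 ls /run/runc` would confirm whether the runc state directory is actually absent on the node, which is what the error message suggests.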
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-952358
helpers_test.go:243: (dbg) docker inspect old-k8s-version-952358:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431",
	        "Created": "2025-11-01T11:56:05.046595205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717405,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:57:23.149091417Z",
	            "FinishedAt": "2025-11-01T11:57:22.313603122Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/hostname",
	        "HostsPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/hosts",
	        "LogPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431-json.log",
	        "Name": "/old-k8s-version-952358",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-952358:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-952358",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431",
	                "LowerDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-952358",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-952358/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-952358",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-952358",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-952358",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d13165a107ed345cbab6d0976f57f64674b6043e76b959752e7a98e2d1cdd11",
	            "SandboxKey": "/var/run/docker/netns/9d13165a107e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33780"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33781"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-952358": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:73:c8:84:1b:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9bca57e57ae79fd54c9c7ebc4412107912a1f60b0190f08a0287f153c5cacff",
	                    "EndpointID": "2ab55e7250bdb9385315d6bf20f7694fa9e4abe7ea5b6729fc60a405e6cadb98",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-952358",
	                        "5af3c19b6c57"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
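The inspect output above records the host ports Docker published for the node container (22/tcp -> 33780 for SSH, 8443/tcp -> 33783 for the API server). As a rough sketch, assuming the container name shown above, the same values can be read back with docker's Go-template support; the provisioning log further down uses the identical "22/tcp" template when it dials SSH:

	# Sketch only: look up the host port mapped to the node's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-952358
	# For this run the value is 33780; swapping in "8443/tcp" would print 33783 (the API server mapping).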
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358: exit status 2 (404.867351ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
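A non-zero exit code from the status command indicates that at least one tracked component is not reported as Running, even though the Host field above prints Running; the helper notes this may be expected after a pause attempt. As a hedged sketch (the extra field names are assumed from minikube's default status output, not shown in this report), the same --format template mechanism can print the remaining components in one call:

	# Sketch only: print host, kubelet, apiserver and kubeconfig state together.
	out/minikube-linux-arm64 status -p old-k8s-version-952358 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'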
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-952358 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-952358 logs -n 25: (1.356929843s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-507511 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo containerd config dump                                                                                                                                                                                                  │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo crio config                                                                                                                                                                                                             │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ delete  │ -p cilium-507511                                                                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p force-systemd-env-857548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ force-systemd-flag-643844 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ delete  │ -p force-systemd-flag-643844                                                                                                                                                                                                                  │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p force-systemd-env-857548                                                                                                                                                                                                                   │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ cert-options-505831 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	│ stop    │ -p old-k8s-version-952358 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:57:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:57:22.879638  717279 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:57:22.879752  717279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:57:22.879763  717279 out.go:374] Setting ErrFile to fd 2...
	I1101 11:57:22.879769  717279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:57:22.880047  717279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:57:22.880405  717279 out.go:368] Setting JSON to false
	I1101 11:57:22.881310  717279 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13192,"bootTime":1761985051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:57:22.881377  717279 start.go:143] virtualization:  
	I1101 11:57:22.884444  717279 out.go:179] * [old-k8s-version-952358] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:57:22.888557  717279 notify.go:221] Checking for updates...
	I1101 11:57:22.888527  717279 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:57:22.892344  717279 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:57:22.895242  717279 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:57:22.898140  717279 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:57:22.901131  717279 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:57:22.904071  717279 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:57:22.907375  717279 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:57:22.910785  717279 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 11:57:22.913725  717279 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:57:22.938749  717279 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:57:22.938872  717279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:57:22.994226  717279 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:57:22.984313991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:57:22.994335  717279 docker.go:319] overlay module found
	I1101 11:57:22.997374  717279 out.go:179] * Using the docker driver based on existing profile
	I1101 11:57:23.000129  717279 start.go:309] selected driver: docker
	I1101 11:57:23.000149  717279 start.go:930] validating driver "docker" against &{Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:57:23.000255  717279 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:57:23.001001  717279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:57:23.064831  717279 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:57:23.055780685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:57:23.065196  717279 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:57:23.065228  717279 cni.go:84] Creating CNI manager for ""
	I1101 11:57:23.065285  717279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:57:23.065323  717279 start.go:353] cluster config:
	{Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:57:23.068574  717279 out.go:179] * Starting "old-k8s-version-952358" primary control-plane node in "old-k8s-version-952358" cluster
	I1101 11:57:23.071366  717279 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:57:23.074306  717279 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:57:23.077080  717279 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:57:23.077152  717279 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 11:57:23.077155  717279 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:57:23.077167  717279 cache.go:59] Caching tarball of preloaded images
	I1101 11:57:23.077265  717279 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:57:23.077275  717279 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 11:57:23.077385  717279 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json ...
	I1101 11:57:23.095733  717279 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:57:23.095759  717279 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:57:23.095778  717279 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:57:23.095802  717279 start.go:360] acquireMachinesLock for old-k8s-version-952358: {Name:mk5b8de3b8dc99aca4b3c9de9389ab7eb20d4d78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:57:23.095864  717279 start.go:364] duration metric: took 35.643µs to acquireMachinesLock for "old-k8s-version-952358"
	I1101 11:57:23.095888  717279 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:57:23.095893  717279 fix.go:54] fixHost starting: 
	I1101 11:57:23.096157  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:23.113545  717279 fix.go:112] recreateIfNeeded on old-k8s-version-952358: state=Stopped err=<nil>
	W1101 11:57:23.113590  717279 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:57:23.116726  717279 out.go:252] * Restarting existing docker container for "old-k8s-version-952358" ...
	I1101 11:57:23.116810  717279 cli_runner.go:164] Run: docker start old-k8s-version-952358
	I1101 11:57:23.388246  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:23.411074  717279 kic.go:430] container "old-k8s-version-952358" state is running.
	I1101 11:57:23.411445  717279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:57:23.435622  717279 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json ...
	I1101 11:57:23.435850  717279 machine.go:94] provisionDockerMachine start ...
	I1101 11:57:23.435921  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:23.456844  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:23.457166  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:23.457186  717279 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:57:23.458085  717279 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:57:26.606427  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-952358
	
	I1101 11:57:26.606462  717279 ubuntu.go:182] provisioning hostname "old-k8s-version-952358"
	I1101 11:57:26.606527  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:26.625682  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:26.626138  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:26.626161  717279 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-952358 && echo "old-k8s-version-952358" | sudo tee /etc/hostname
	I1101 11:57:26.787705  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-952358
	
	I1101 11:57:26.787832  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:26.807628  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:26.807947  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:26.807970  717279 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-952358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-952358/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-952358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:57:26.957856  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:57:26.957880  717279 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:57:26.957907  717279 ubuntu.go:190] setting up certificates
	I1101 11:57:26.957923  717279 provision.go:84] configureAuth start
	I1101 11:57:26.957988  717279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:57:26.974936  717279 provision.go:143] copyHostCerts
	I1101 11:57:26.975005  717279 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:57:26.975029  717279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:57:26.975119  717279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:57:26.975282  717279 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:57:26.975294  717279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:57:26.975324  717279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:57:26.975394  717279 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:57:26.975403  717279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:57:26.975431  717279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:57:26.975489  717279 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-952358 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-952358]
	I1101 11:57:27.278768  717279 provision.go:177] copyRemoteCerts
	I1101 11:57:27.278840  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:57:27.278887  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.297517  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:27.405501  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:57:27.428391  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 11:57:27.448527  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:57:27.468541  717279 provision.go:87] duration metric: took 510.601361ms to configureAuth
	I1101 11:57:27.468609  717279 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:57:27.468812  717279 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:57:27.468925  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.486525  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:27.486832  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:27.486851  717279 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:57:27.810256  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:57:27.810281  717279 machine.go:97] duration metric: took 4.374415217s to provisionDockerMachine
	I1101 11:57:27.810292  717279 start.go:293] postStartSetup for "old-k8s-version-952358" (driver="docker")
	I1101 11:57:27.810320  717279 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:57:27.810405  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:57:27.810465  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.829019  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:27.933510  717279 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:57:27.936906  717279 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:57:27.936936  717279 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:57:27.936948  717279 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:57:27.937000  717279 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:57:27.937087  717279 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:57:27.937209  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:57:27.944680  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:57:27.963444  717279 start.go:296] duration metric: took 153.137505ms for postStartSetup
	I1101 11:57:27.963603  717279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:57:27.963667  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.983822  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:28.087108  717279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:57:28.092073  717279 fix.go:56] duration metric: took 4.996171668s for fixHost
	I1101 11:57:28.092100  717279 start.go:83] releasing machines lock for "old-k8s-version-952358", held for 4.996222229s
	I1101 11:57:28.092170  717279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:57:28.109517  717279 ssh_runner.go:195] Run: cat /version.json
	I1101 11:57:28.109556  717279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:57:28.109582  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:28.109620  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:28.135446  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:28.135969  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:28.237324  717279 ssh_runner.go:195] Run: systemctl --version
	I1101 11:57:28.328606  717279 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:57:28.366007  717279 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:57:28.370661  717279 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:57:28.370752  717279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:57:28.378781  717279 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:57:28.378816  717279 start.go:496] detecting cgroup driver to use...
	I1101 11:57:28.378866  717279 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:57:28.378938  717279 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:57:28.394033  717279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:57:28.407069  717279 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:57:28.407164  717279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:57:28.422477  717279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:57:28.436222  717279 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:57:28.555521  717279 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:57:28.686703  717279 docker.go:234] disabling docker service ...
	I1101 11:57:28.686821  717279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:57:28.701458  717279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:57:28.714698  717279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:57:28.842367  717279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:57:28.958762  717279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:57:28.972182  717279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:57:28.987296  717279 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 11:57:28.987413  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:28.996677  717279 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:57:28.996746  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.007561  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.017431  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.026797  717279 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:57:29.041845  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.050846  717279 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.059453  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.068225  717279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:57:29.076123  717279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:57:29.083753  717279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:57:29.202519  717279 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:57:29.345609  717279 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:57:29.345823  717279 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:57:29.350295  717279 start.go:564] Will wait 60s for crictl version
	I1101 11:57:29.350410  717279 ssh_runner.go:195] Run: which crictl
	I1101 11:57:29.354420  717279 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:57:29.384270  717279 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:57:29.384436  717279 ssh_runner.go:195] Run: crio --version
	I1101 11:57:29.421338  717279 ssh_runner.go:195] Run: crio --version
	I1101 11:57:29.454507  717279 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 11:57:29.457364  717279 cli_runner.go:164] Run: docker network inspect old-k8s-version-952358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:57:29.474232  717279 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 11:57:29.478007  717279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:57:29.488782  717279 kubeadm.go:884] updating cluster {Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:57:29.488894  717279 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:57:29.488950  717279 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:57:29.523452  717279 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:57:29.523479  717279 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:57:29.523537  717279 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:57:29.556013  717279 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:57:29.556037  717279 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:57:29.556046  717279 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 11:57:29.556153  717279 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-952358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:57:29.556251  717279 ssh_runner.go:195] Run: crio config
	I1101 11:57:29.634445  717279 cni.go:84] Creating CNI manager for ""
	I1101 11:57:29.634470  717279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:57:29.634484  717279 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:57:29.634507  717279 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-952358 NodeName:old-k8s-version-952358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:57:29.634654  717279 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-952358"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:57:29.634732  717279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 11:57:29.642773  717279 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:57:29.642852  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:57:29.650751  717279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 11:57:29.664644  717279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:57:29.679356  717279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
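The rendered kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new before the restart path decides whether reconfiguration is needed (the corresponding diff shows up further down in this log). A minimal sketch for inspecting it by hand from the host, assuming the profile name from this run:

	minikube -p old-k8s-version-952358 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p old-k8s-version-952358 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new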
	I1101 11:57:29.692567  717279 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:57:29.697030  717279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:57:29.706728  717279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:57:29.824660  717279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:57:29.842238  717279 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358 for IP: 192.168.85.2
	I1101 11:57:29.842308  717279 certs.go:195] generating shared ca certs ...
	I1101 11:57:29.842340  717279 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:29.842523  717279 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:57:29.842598  717279 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:57:29.842622  717279 certs.go:257] generating profile certs ...
	I1101 11:57:29.842748  717279 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.key
	I1101 11:57:29.842845  717279 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key.1ce2c540
	I1101 11:57:29.842919  717279 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key
	I1101 11:57:29.843055  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:57:29.843115  717279 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:57:29.843140  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:57:29.843187  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:57:29.843242  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:57:29.843297  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:57:29.843369  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:57:29.843986  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:57:29.867255  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:57:29.890600  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:57:29.921077  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:57:29.946621  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 11:57:29.973520  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:57:29.997365  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:57:30.039029  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:57:30.064879  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:57:30.099222  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:57:30.132835  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:57:30.163610  717279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:57:30.180031  717279 ssh_runner.go:195] Run: openssl version
	I1101 11:57:30.189394  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:57:30.199169  717279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:57:30.203537  717279 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:57:30.203666  717279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:57:30.246439  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:57:30.255335  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:57:30.264484  717279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:57:30.268418  717279 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:57:30.268488  717279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:57:30.310721  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:57:30.318881  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:57:30.327417  717279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:57:30.331203  717279 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:57:30.331333  717279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:57:30.373107  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:57:30.381271  717279 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:57:30.385229  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:57:30.426908  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:57:30.477440  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:57:30.526485  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:57:30.587441  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:57:30.662155  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
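The openssl runs above only assert that each control-plane certificate stays valid for at least another 24 hours (-checkend 86400). A minimal sketch, assuming the same profile name, for reproducing one check and also printing the actual expiry date:

	minikube -p old-k8s-version-952358 ssh -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	minikube -p old-k8s-version-952358 ssh -- "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid-for-24h"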
	I1101 11:57:30.777108  717279 kubeadm.go:401] StartCluster: {Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:57:30.777222  717279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:57:30.777289  717279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:57:30.842798  717279 cri.go:89] found id: "9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63"
	I1101 11:57:30.842821  717279 cri.go:89] found id: "aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be"
	I1101 11:57:30.842826  717279 cri.go:89] found id: "8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339"
	I1101 11:57:30.842838  717279 cri.go:89] found id: "bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86"
	I1101 11:57:30.842843  717279 cri.go:89] found id: ""
	I1101 11:57:30.842893  717279 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 11:57:30.862324  717279 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:57:30Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:57:30.862417  717279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:57:30.876163  717279 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:57:30.876187  717279 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:57:30.876242  717279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:57:30.886217  717279 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:57:30.886859  717279 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-952358" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:57:30.887146  717279 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-952358" cluster setting kubeconfig missing "old-k8s-version-952358" context setting]
	I1101 11:57:30.887622  717279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:30.889371  717279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:57:30.901872  717279 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 11:57:30.901907  717279 kubeadm.go:602] duration metric: took 25.713579ms to restartPrimaryControlPlane
	I1101 11:57:30.901917  717279 kubeadm.go:403] duration metric: took 124.820734ms to StartCluster
	I1101 11:57:30.901932  717279 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:30.902001  717279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:57:30.902848  717279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:30.903045  717279 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:57:30.903430  717279 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:57:30.903423  717279 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:57:30.903510  717279 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-952358"
	I1101 11:57:30.903525  717279 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-952358"
	W1101 11:57:30.903531  717279 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:57:30.903556  717279 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:57:30.904013  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:30.904174  717279 addons.go:70] Setting dashboard=true in profile "old-k8s-version-952358"
	I1101 11:57:30.904186  717279 addons.go:239] Setting addon dashboard=true in "old-k8s-version-952358"
	W1101 11:57:30.904192  717279 addons.go:248] addon dashboard should already be in state true
	I1101 11:57:30.904212  717279 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:57:30.904516  717279 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-952358"
	I1101 11:57:30.904534  717279 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-952358"
	I1101 11:57:30.904588  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:30.904821  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:30.908895  717279 out.go:179] * Verifying Kubernetes components...
	I1101 11:57:30.914312  717279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
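Only three addons are being re-enabled on this restart (default-storageclass, storage-provisioner, dashboard); everything else in the toEnable map is false. A hedged way to confirm the resulting addon state for this profile once start completes:

	minikube -p old-k8s-version-952358 addons list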
	I1101 11:57:30.972925  717279 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:57:30.972925  717279 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:57:30.977010  717279 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:57:30.977034  717279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:57:30.977102  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:30.980433  717279 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:57:30.983296  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:57:30.983321  717279 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:57:30.983405  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:30.989086  717279 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-952358"
	W1101 11:57:30.989110  717279 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:57:30.989137  717279 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:57:30.989557  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:31.031988  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:31.056993  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:31.059745  717279 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:57:31.059766  717279 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:57:31.059830  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:31.086698  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:31.266532  717279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:57:31.292234  717279 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-952358" to be "Ready" ...
	I1101 11:57:31.298972  717279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:57:31.339652  717279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:57:31.403704  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:57:31.403768  717279 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:57:31.450821  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:57:31.450894  717279 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:57:31.503548  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:57:31.503619  717279 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:57:31.571455  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:57:31.571516  717279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:57:31.645620  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:57:31.645708  717279 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:57:31.677942  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:57:31.678018  717279 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:57:31.715886  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:57:31.715959  717279 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:57:31.740414  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:57:31.740491  717279 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:57:31.767443  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:57:31.767517  717279 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:57:31.791195  717279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
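The dashboard addon is applied as ten individual manifests through the cluster's own v1.28.0 kubectl under /var/lib/minikube/binaries. A minimal follow-up sketch to watch the resulting workloads come up, assuming the kubeconfig context matches the profile name and the deployment keeps the stock name kubernetes-dashboard:

	kubectl --context old-k8s-version-952358 -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m
	kubectl --context old-k8s-version-952358 -n kubernetes-dashboard get pods -o wide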
	I1101 11:57:34.755446  717279 node_ready.go:49] node "old-k8s-version-952358" is "Ready"
	I1101 11:57:34.755472  717279 node_ready.go:38] duration metric: took 3.463154815s for node "old-k8s-version-952358" to be "Ready" ...
	I1101 11:57:34.755484  717279 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:57:34.755544  717279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:57:35.803404  717279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.50435892s)
	I1101 11:57:36.338141  717279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.998408257s)
	I1101 11:57:36.842635  717279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.05134942s)
	I1101 11:57:36.842904  717279 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.087347137s)
	I1101 11:57:36.842925  717279 api_server.go:72] duration metric: took 5.939854268s to wait for apiserver process to appear ...
	I1101 11:57:36.842931  717279 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:57:36.842950  717279 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 11:57:36.845979  717279 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-952358 addons enable metrics-server
	
	I1101 11:57:36.848865  717279 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1101 11:57:36.851715  717279 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 11:57:36.852058  717279 addons.go:515] duration metric: took 5.948637454s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 11:57:36.853097  717279 api_server.go:141] control plane version: v1.28.0
	I1101 11:57:36.853121  717279 api_server.go:131] duration metric: took 10.184402ms to wait for apiserver health ...
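The health wait above is an HTTPS GET against /healthz on the advertised endpoint followed by a version read. A rough equivalent from the host, assuming the default RBAC that exposes /healthz and /version to unauthenticated clients and that the 192.168.85.0/24 docker network is reachable from where curl runs:

	curl -sk https://192.168.85.2:8443/healthz
	curl -sk https://192.168.85.2:8443/version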
	I1101 11:57:36.853131  717279 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:57:36.856789  717279 system_pods.go:59] 8 kube-system pods found
	I1101 11:57:36.856822  717279 system_pods.go:61] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:57:36.856838  717279 system_pods.go:61] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:57:36.856844  717279 system_pods.go:61] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:57:36.856852  717279 system_pods.go:61] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:57:36.856865  717279 system_pods.go:61] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:57:36.856871  717279 system_pods.go:61] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:57:36.856880  717279 system_pods.go:61] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:57:36.856893  717279 system_pods.go:61] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Running
	I1101 11:57:36.856899  717279 system_pods.go:74] duration metric: took 3.762969ms to wait for pod list to return data ...
	I1101 11:57:36.856918  717279 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:57:36.865921  717279 default_sa.go:45] found service account: "default"
	I1101 11:57:36.865949  717279 default_sa.go:55] duration metric: took 9.02428ms for default service account to be created ...
	I1101 11:57:36.865959  717279 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:57:36.869213  717279 system_pods.go:86] 8 kube-system pods found
	I1101 11:57:36.869245  717279 system_pods.go:89] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:57:36.869254  717279 system_pods.go:89] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:57:36.869261  717279 system_pods.go:89] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:57:36.869268  717279 system_pods.go:89] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:57:36.869280  717279 system_pods.go:89] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:57:36.869290  717279 system_pods.go:89] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:57:36.869296  717279 system_pods.go:89] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:57:36.869304  717279 system_pods.go:89] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Running
	I1101 11:57:36.869313  717279 system_pods.go:126] duration metric: took 3.347752ms to wait for k8s-apps to be running ...
	I1101 11:57:36.869325  717279 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:57:36.869384  717279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:57:36.909778  717279 system_svc.go:56] duration metric: took 40.442761ms WaitForService to wait for kubelet
	I1101 11:57:36.909811  717279 kubeadm.go:587] duration metric: took 6.006738724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:57:36.909832  717279 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:57:36.912603  717279 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:57:36.912636  717279 node_conditions.go:123] node cpu capacity is 2
	I1101 11:57:36.912648  717279 node_conditions.go:105] duration metric: took 2.810785ms to run NodePressure ...
	I1101 11:57:36.912661  717279 start.go:242] waiting for startup goroutines ...
	I1101 11:57:36.912668  717279 start.go:247] waiting for cluster config update ...
	I1101 11:57:36.912679  717279 start.go:256] writing updated cluster config ...
	I1101 11:57:36.912965  717279 ssh_runner.go:195] Run: rm -f paused
	I1101 11:57:36.917383  717279 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:57:36.921775  717279 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pmb27" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:57:38.927346  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:40.933149  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:43.428077  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:45.927351  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:47.928130  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:49.929126  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:52.428270  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:54.428379  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:56.428431  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:58.928926  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:01.427902  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:03.428572  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:05.927537  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:08.427589  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:10.928139  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:13.427849  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	I1101 11:58:14.934614  717279 pod_ready.go:94] pod "coredns-5dd5756b68-pmb27" is "Ready"
	I1101 11:58:14.934645  717279 pod_ready.go:86] duration metric: took 38.012840589s for pod "coredns-5dd5756b68-pmb27" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.938147  717279 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.944885  717279 pod_ready.go:94] pod "etcd-old-k8s-version-952358" is "Ready"
	I1101 11:58:14.944915  717279 pod_ready.go:86] duration metric: took 6.743693ms for pod "etcd-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.948129  717279 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.954327  717279 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-952358" is "Ready"
	I1101 11:58:14.954357  717279 pod_ready.go:86] duration metric: took 6.196872ms for pod "kube-apiserver-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.957620  717279 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.126644  717279 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-952358" is "Ready"
	I1101 11:58:15.126676  717279 pod_ready.go:86] duration metric: took 169.030593ms for pod "kube-controller-manager-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.327007  717279 pod_ready.go:83] waiting for pod "kube-proxy-kmxd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.725731  717279 pod_ready.go:94] pod "kube-proxy-kmxd8" is "Ready"
	I1101 11:58:15.725758  717279 pod_ready.go:86] duration metric: took 398.677067ms for pod "kube-proxy-kmxd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.926436  717279 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:16.325895  717279 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-952358" is "Ready"
	I1101 11:58:16.325922  717279 pod_ready.go:86] duration metric: took 399.461938ms for pod "kube-scheduler-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:16.325935  717279 pod_ready.go:40] duration metric: took 39.408517431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
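The extra wait loops over the kube-system control-plane labels until each pod reports Ready (or disappears). A similar check can be expressed directly with kubectl wait, assuming the context name matches the profile:

	kubectl --context old-k8s-version-952358 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl --context old-k8s-version-952358 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m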
	I1101 11:58:16.384975  717279 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 11:58:16.388046  717279 out.go:203] 
	W1101 11:58:16.390923  717279 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 11:58:16.393734  717279 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 11:58:16.396578  717279 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-952358" cluster and "default" namespace by default
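The skew warning above (host kubectl 1.33.2 against a 1.28.0 cluster, minor skew 5) can be sidestepped with the version-matched kubectl that minikube already downloaded, as the hint suggests:

	minikube -p old-k8s-version-952358 kubectl -- version
	minikube -p old-k8s-version-952358 kubectl -- get pods -A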
	
	
	==> CRI-O <==
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.023143659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.030726362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.032034719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.054225676Z" level=info msg="Created container badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd/dashboard-metrics-scraper" id=db063886-4209-4d68-9164-fdbfcde2091e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.057407961Z" level=info msg="Starting container: badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64" id=36757657-5124-4d48-a221-d58ea2ada9b6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.062519182Z" level=info msg="Started container" PID=1643 containerID=badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd/dashboard-metrics-scraper id=36757657-5124-4d48-a221-d58ea2ada9b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914
	Nov 01 11:58:12 old-k8s-version-952358 conmon[1641]: conmon badf1228f71bd4d5c2c3 <ninfo>: container 1643 exited with status 1
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.300113597Z" level=info msg="Removing container: 3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58" id=dc897409-ef04-4b33-98b1-527b76a45612 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.310324043Z" level=info msg="Error loading conmon cgroup of container 3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58: cgroup deleted" id=dc897409-ef04-4b33-98b1-527b76a45612 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.313497761Z" level=info msg="Removed container 3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd/dashboard-metrics-scraper" id=dc897409-ef04-4b33-98b1-527b76a45612 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.841116469Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.845812563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.845849174Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.845873839Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.849483108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.849520623Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.849544098Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.853083779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.853120193Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.853145285Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.856392359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.856429003Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.856454333Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.859818782Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.859853458Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	badf1228f71bd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   edbd8683ccecc       dashboard-metrics-scraper-5f989dc9cf-xn7cd       kubernetes-dashboard
	095e20aeaa7c4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   7e79d97623751       storage-provisioner                              kube-system
	64a312d9f4c53       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   73c58832ed164       kubernetes-dashboard-8694d4445c-nhfb8            kubernetes-dashboard
	f088bfba05aee       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   0a99d8b18ef3e       busybox                                          default
	1673b27ac77a2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   f0a9201055483       coredns-5dd5756b68-pmb27                         kube-system
	f807a58e116c1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   7e79d97623751       storage-provisioner                              kube-system
	d4b9fdc04889e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   4d64e0f152004       kindnet-sn7mz                                    kube-system
	700b0703d579a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   4d9affad8b0ea       kube-proxy-kmxd8                                 kube-system
	9862e16108821       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   28543c92fa70b       etcd-old-k8s-version-952358                      kube-system
	aada77cf39436       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   7b1a4e0209855       kube-scheduler-old-k8s-version-952358            kube-system
	8f5fc92ea368a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   2494a3565646a       kube-apiserver-old-k8s-version-952358            kube-system
	bd67e3cf97272       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   0c3b8ff5c7d58       kube-controller-manager-old-k8s-version-952358   kube-system
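The container-status table is crictl output from inside the node; dashboard-metrics-scraper is on attempt 2 and shows Exited, and the conmon line in the CRI-O log above records it exiting with status 1. A quick sketch, assuming the same profile, to regenerate the listing and pull the failing container's logs:

	minikube -p old-k8s-version-952358 ssh -- sudo crictl ps -a
	minikube -p old-k8s-version-952358 ssh -- sudo crictl logs badf1228f71bd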
	
	
	==> coredns [1673b27ac77a27952be16353ca8f921102c799a4d215bc4be1208247b488b327] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57697 - 29279 "HINFO IN 7631426333321470715.1191036699852579226. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026269705s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
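The trailing warning records a transient failure reaching the in-cluster API VIP (10.96.0.1:443) shortly after the restart; the coredns pod later flipping to Ready suggests it recovered. A hedged follow-up check, assuming the context name matches the profile:

	kubectl --context old-k8s-version-952358 get --raw /readyz
	kubectl --context old-k8s-version-952358 -n kube-system logs -l k8s-app=kube-dns --tail=5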
	
	
	==> describe nodes <==
	Name:               old-k8s-version-952358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-952358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=old-k8s-version-952358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_56_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-952358
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:58:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-952358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dbeefb29-03d1-48b6-93d2-8db0a71a3a9e
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-pmb27                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-952358                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-sn7mz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-952358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-952358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-kmxd8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-952358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-xn7cd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-nhfb8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-952358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-952358 event: Registered Node old-k8s-version-952358 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-952358 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-952358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-952358 event: Registered Node old-k8s-version-952358 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:34] overlayfs: idmapped layers are currently not supported
	[ +35.784283] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63] <==
	{"level":"info","ts":"2025-11-01T11:57:30.895422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:57:30.895488Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:57:30.903342Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T11:57:30.911515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T11:57:30.919308Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T11:57:30.919405Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T11:57:30.911554Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T11:57:30.921743Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T11:57:30.911786Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:57:30.921836Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:57:30.921875Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:57:31.071394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T11:57:31.071453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T11:57:31.071492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T11:57:31.071506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.071512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.071528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.071537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.082253Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-952358 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T11:57:31.082295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:57:31.083655Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T11:57:31.082306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:57:31.102356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T11:57:31.173104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T11:57:31.173145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:58:31 up  3:41,  0 user,  load average: 1.58, 2.86, 2.52
	Linux old-k8s-version-952358 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d4b9fdc04889ed0379546b17184bd798a9da3238ce38b3beb28e6b6a07f5d656] <==
	I1101 11:57:35.621977       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:57:35.622381       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 11:57:35.622549       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:57:35.622590       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:57:35.622629       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:57:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:57:35.839470       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:57:35.839488       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:57:35.839497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:57:35.839773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 11:58:05.840382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 11:58:05.840382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 11:58:05.840506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 11:58:05.840602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 11:58:07.140607       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:58:07.140635       1 metrics.go:72] Registering metrics
	I1101 11:58:07.140711       1 controller.go:711] "Syncing nftables rules"
	I1101 11:58:15.839551       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:58:15.840806       1 main.go:301] handling current node
	I1101 11:58:25.843971       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:58:25.844036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339] <==
	I1101 11:57:34.777612       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:57:34.778925       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 11:57:34.778950       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1101 11:57:34.792650       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I1101 11:57:34.842113       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:57:34.855104       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 11:57:34.879667       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1101 11:57:34.888069       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 11:57:34.892925       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:57:35.453186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:57:36.646031       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 11:57:36.697550       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 11:57:36.725849       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:57:36.739731       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:57:36.763172       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 11:57:36.817550       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.192.249"}
	I1101 11:57:36.834314       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.102.94"}
	E1101 11:57:44.780374       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1101 11:57:47.472361       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 11:57:47.553832       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 11:57:47.632845       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1101 11:57:54.781139       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1101 11:58:04.785386       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1101 11:58:14.786735       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1101 11:58:24.787641       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86] <==
	I1101 11:57:47.498518       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-xn7cd"
	I1101 11:57:47.508766       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-nhfb8"
	I1101 11:57:47.538013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.079929ms"
	I1101 11:57:47.554681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.628868ms"
	I1101 11:57:47.576312       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I1101 11:57:47.576746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.007135ms"
	I1101 11:57:47.577535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.609µs"
	I1101 11:57:47.585600       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1101 11:57:47.585743       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1101 11:57:47.613440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.359822ms"
	I1101 11:57:47.614145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.165µs"
	I1101 11:57:47.643110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.547573ms"
	I1101 11:57:47.643295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.084µs"
	I1101 11:57:47.679836       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 11:57:47.699454       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 11:57:47.699484       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 11:57:54.224415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.416275ms"
	I1101 11:57:55.215785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.753µs"
	I1101 11:57:56.224771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.517µs"
	I1101 11:57:59.247552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.650656ms"
	I1101 11:57:59.247638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.023µs"
	I1101 11:58:12.324468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.035µs"
	I1101 11:58:14.615059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.441763ms"
	I1101 11:58:14.615191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.328µs"
	I1101 11:58:19.633968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.209µs"
	
	
	==> kube-proxy [700b0703d579a0b705abcfec7e4dc2f3e95f8991206525d04be84898e72ba25d] <==
	I1101 11:57:35.692948       1 server_others.go:69] "Using iptables proxy"
	I1101 11:57:35.720270       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 11:57:35.934396       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:57:35.936506       1 server_others.go:152] "Using iptables Proxier"
	I1101 11:57:35.936549       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 11:57:35.936557       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 11:57:35.936582       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 11:57:35.936778       1 server.go:846] "Version info" version="v1.28.0"
	I1101 11:57:35.936809       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:57:35.943555       1 config.go:188] "Starting service config controller"
	I1101 11:57:35.943577       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 11:57:35.943593       1 config.go:97] "Starting endpoint slice config controller"
	I1101 11:57:35.943597       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 11:57:35.943989       1 config.go:315] "Starting node config controller"
	I1101 11:57:35.943996       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 11:57:36.049796       1 shared_informer.go:318] Caches are synced for node config
	I1101 11:57:36.049842       1 shared_informer.go:318] Caches are synced for service config
	I1101 11:57:36.049882       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be] <==
	I1101 11:57:32.899184       1 serving.go:348] Generated self-signed cert in-memory
	W1101 11:57:34.678166       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:57:34.678198       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:57:34.678208       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:57:34.678217       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:57:34.815954       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 11:57:34.816056       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:57:34.820984       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:57:34.821092       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 11:57:34.821846       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 11:57:34.821919       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 11:57:34.921845       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 11:57:48 old-k8s-version-952358 kubelet[778]: E1101 11:57:48.762879     778 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 11:57:48 old-k8s-version-952358 kubelet[778]: E1101 11:57:48.763057     778 projected.go:198] Error preparing data for projected volume kube-api-access-42lnc for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-nhfb8: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 11:57:48 old-k8s-version-952358 kubelet[778]: E1101 11:57:48.763190     778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6cd70f9-cc0d-4ddf-9438-3f717d09de5d-kube-api-access-42lnc podName:b6cd70f9-cc0d-4ddf-9438-3f717d09de5d nodeName:}" failed. No retries permitted until 2025-11-01 11:57:49.263166689 +0000 UTC m=+19.419123396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-42lnc" (UniqueName: "kubernetes.io/projected/b6cd70f9-cc0d-4ddf-9438-3f717d09de5d-kube-api-access-42lnc") pod "kubernetes-dashboard-8694d4445c-nhfb8" (UID: "b6cd70f9-cc0d-4ddf-9438-3f717d09de5d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 11:57:49 old-k8s-version-952358 kubelet[778]: W1101 11:57:49.638532     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914 WatchSource:0}: Error finding container edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914: Status 404 returned error can't find the container with id edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914
	Nov 01 11:57:49 old-k8s-version-952358 kubelet[778]: W1101 11:57:49.668795     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-73c58832ed164e25a380c7b9a360b70a63f6a6357e6987a2659b4b8823a65710 WatchSource:0}: Error finding container 73c58832ed164e25a380c7b9a360b70a63f6a6357e6987a2659b4b8823a65710: Status 404 returned error can't find the container with id 73c58832ed164e25a380c7b9a360b70a63f6a6357e6987a2659b4b8823a65710
	Nov 01 11:57:54 old-k8s-version-952358 kubelet[778]: I1101 11:57:54.195158     778 scope.go:117] "RemoveContainer" containerID="20f22c04e3d9401d26dfe124953b292e2fe7a51d21d61b0e9183ab72f1e5256a"
	Nov 01 11:57:55 old-k8s-version-952358 kubelet[778]: I1101 11:57:55.198730     778 scope.go:117] "RemoveContainer" containerID="20f22c04e3d9401d26dfe124953b292e2fe7a51d21d61b0e9183ab72f1e5256a"
	Nov 01 11:57:55 old-k8s-version-952358 kubelet[778]: I1101 11:57:55.198930     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:57:55 old-k8s-version-952358 kubelet[778]: E1101 11:57:55.199205     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:57:56 old-k8s-version-952358 kubelet[778]: I1101 11:57:56.207332     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:57:56 old-k8s-version-952358 kubelet[778]: E1101 11:57:56.207604     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:57:59 old-k8s-version-952358 kubelet[778]: I1101 11:57:59.619786     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:57:59 old-k8s-version-952358 kubelet[778]: E1101 11:57:59.620094     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:58:06 old-k8s-version-952358 kubelet[778]: I1101 11:58:06.279143     778 scope.go:117] "RemoveContainer" containerID="f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc"
	Nov 01 11:58:06 old-k8s-version-952358 kubelet[778]: I1101 11:58:06.326970     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-nhfb8" podStartSLOduration=10.519729975 podCreationTimestamp="2025-11-01 11:57:47 +0000 UTC" firstStartedPulling="2025-11-01 11:57:49.672315704 +0000 UTC m=+19.828272411" lastFinishedPulling="2025-11-01 11:57:58.470927398 +0000 UTC m=+28.626884105" observedRunningTime="2025-11-01 11:57:59.230703338 +0000 UTC m=+29.386660053" watchObservedRunningTime="2025-11-01 11:58:06.318341669 +0000 UTC m=+36.474298375"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: I1101 11:58:12.018416     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: I1101 11:58:12.298280     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: I1101 11:58:12.298485     778 scope.go:117] "RemoveContainer" containerID="badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: E1101 11:58:12.298763     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:58:19 old-k8s-version-952358 kubelet[778]: I1101 11:58:19.619069     778 scope.go:117] "RemoveContainer" containerID="badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	Nov 01 11:58:19 old-k8s-version-952358 kubelet[778]: E1101 11:58:19.619850     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:58:28 old-k8s-version-952358 kubelet[778]: I1101 11:58:28.715683     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 11:58:28 old-k8s-version-952358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 11:58:28 old-k8s-version-952358 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 11:58:28 old-k8s-version-952358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [64a312d9f4c53e1433cc7d19282cc174c8a3ba911a400b3129ce54fe724fd5ba] <==
	2025/11/01 11:57:58 Using namespace: kubernetes-dashboard
	2025/11/01 11:57:58 Using in-cluster config to connect to apiserver
	2025/11/01 11:57:58 Using secret token for csrf signing
	2025/11/01 11:57:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 11:57:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 11:57:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 11:57:58 Generating JWE encryption key
	2025/11/01 11:57:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 11:57:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 11:57:58 Initializing JWE encryption key from synchronized object
	2025/11/01 11:57:58 Creating in-cluster Sidecar client
	2025/11/01 11:57:58 Serving insecurely on HTTP port: 9090
	2025/11/01 11:57:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 11:58:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 11:57:58 Starting overwatch
	
	
	==> storage-provisioner [095e20aeaa7c4e797b3b245447eb78281f018d603dfb8a3b04200dd8864113ff] <==
	I1101 11:58:06.349419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 11:58:06.364298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 11:58:06.364438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 11:58:23.767037       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 11:58:23.767215       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-952358_6d4f130f-9148-476c-917b-38a958fd9a9d!
	I1101 11:58:23.767714       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5ce5b6b-8d31-4770-8329-c46e139ecfe3", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-952358_6d4f130f-9148-476c-917b-38a958fd9a9d became leader
	I1101 11:58:23.867499       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-952358_6d4f130f-9148-476c-917b-38a958fd9a9d!
	
	
	==> storage-provisioner [f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc] <==
	I1101 11:57:35.563945       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 11:58:05.566170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-952358 -n old-k8s-version-952358
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-952358 -n old-k8s-version-952358: exit status 2 (366.887283ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-952358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-952358
helpers_test.go:243: (dbg) docker inspect old-k8s-version-952358:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431",
	        "Created": "2025-11-01T11:56:05.046595205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717405,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:57:23.149091417Z",
	            "FinishedAt": "2025-11-01T11:57:22.313603122Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/hostname",
	        "HostsPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/hosts",
	        "LogPath": "/var/lib/docker/containers/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431-json.log",
	        "Name": "/old-k8s-version-952358",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-952358:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-952358",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431",
	                "LowerDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e561ba643a82c8ab2485d02c74b5f1d8ae7f554c664131f07a881a19d1b9f455/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-952358",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-952358/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-952358",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-952358",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-952358",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d13165a107ed345cbab6d0976f57f64674b6043e76b959752e7a98e2d1cdd11",
	            "SandboxKey": "/var/run/docker/netns/9d13165a107e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33780"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33781"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-952358": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:73:c8:84:1b:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9bca57e57ae79fd54c9c7ebc4412107912a1f60b0190f08a0287f153c5cacff",
	                    "EndpointID": "2ab55e7250bdb9385315d6bf20f7694fa9e4abe7ea5b6729fc60a405e6cadb98",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-952358",
	                        "5af3c19b6c57"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358: exit status 2 (370.727054ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-952358 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-952358 logs -n 25: (1.292801443s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-507511 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo containerd config dump                                                                                                                                                                                                  │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ ssh     │ -p cilium-507511 sudo crio config                                                                                                                                                                                                             │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ delete  │ -p cilium-507511                                                                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p force-systemd-env-857548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ force-systemd-flag-643844 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ delete  │ -p force-systemd-flag-643844                                                                                                                                                                                                                  │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p force-systemd-env-857548                                                                                                                                                                                                                   │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ cert-options-505831 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	│ stop    │ -p old-k8s-version-952358 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:57:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
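For readability, here is how that prefix decodes against the first entry below (an editor-added annotation, not part of the captured output):

    # I1101 11:57:22.879638  717279 out.go:360] Setting OutFile to fd 1 ...
    #  I               = severity (Info; W/E/F for warning/error/fatal)
    #  1101            = mmdd (Nov 01)
    #  11:57:22.879638 = hh:mm:ss.uuuuuu
    #  717279          = threadid (the minikube process)
    #  out.go:360      = source file:line that emitted the message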
	I1101 11:57:22.879638  717279 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:57:22.879752  717279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:57:22.879763  717279 out.go:374] Setting ErrFile to fd 2...
	I1101 11:57:22.879769  717279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:57:22.880047  717279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:57:22.880405  717279 out.go:368] Setting JSON to false
	I1101 11:57:22.881310  717279 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13192,"bootTime":1761985051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:57:22.881377  717279 start.go:143] virtualization:  
	I1101 11:57:22.884444  717279 out.go:179] * [old-k8s-version-952358] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:57:22.888557  717279 notify.go:221] Checking for updates...
	I1101 11:57:22.888527  717279 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:57:22.892344  717279 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:57:22.895242  717279 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:57:22.898140  717279 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:57:22.901131  717279 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:57:22.904071  717279 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:57:22.907375  717279 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:57:22.910785  717279 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 11:57:22.913725  717279 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:57:22.938749  717279 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:57:22.938872  717279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:57:22.994226  717279 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:57:22.984313991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:57:22.994335  717279 docker.go:319] overlay module found
	I1101 11:57:22.997374  717279 out.go:179] * Using the docker driver based on existing profile
	I1101 11:57:23.000129  717279 start.go:309] selected driver: docker
	I1101 11:57:23.000149  717279 start.go:930] validating driver "docker" against &{Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:57:23.000255  717279 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:57:23.001001  717279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:57:23.064831  717279 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:57:23.055780685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:57:23.065196  717279 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:57:23.065228  717279 cni.go:84] Creating CNI manager for ""
	I1101 11:57:23.065285  717279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:57:23.065323  717279 start.go:353] cluster config:
	{Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:57:23.068574  717279 out.go:179] * Starting "old-k8s-version-952358" primary control-plane node in "old-k8s-version-952358" cluster
	I1101 11:57:23.071366  717279 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:57:23.074306  717279 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:57:23.077080  717279 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:57:23.077152  717279 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 11:57:23.077155  717279 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:57:23.077167  717279 cache.go:59] Caching tarball of preloaded images
	I1101 11:57:23.077265  717279 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:57:23.077275  717279 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 11:57:23.077385  717279 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json ...
	I1101 11:57:23.095733  717279 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:57:23.095759  717279 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:57:23.095778  717279 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:57:23.095802  717279 start.go:360] acquireMachinesLock for old-k8s-version-952358: {Name:mk5b8de3b8dc99aca4b3c9de9389ab7eb20d4d78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:57:23.095864  717279 start.go:364] duration metric: took 35.643µs to acquireMachinesLock for "old-k8s-version-952358"
	I1101 11:57:23.095888  717279 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:57:23.095893  717279 fix.go:54] fixHost starting: 
	I1101 11:57:23.096157  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:23.113545  717279 fix.go:112] recreateIfNeeded on old-k8s-version-952358: state=Stopped err=<nil>
	W1101 11:57:23.113590  717279 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:57:23.116726  717279 out.go:252] * Restarting existing docker container for "old-k8s-version-952358" ...
	I1101 11:57:23.116810  717279 cli_runner.go:164] Run: docker start old-k8s-version-952358
	I1101 11:57:23.388246  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:23.411074  717279 kic.go:430] container "old-k8s-version-952358" state is running.
	I1101 11:57:23.411445  717279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:57:23.435622  717279 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/config.json ...
	I1101 11:57:23.435850  717279 machine.go:94] provisionDockerMachine start ...
	I1101 11:57:23.435921  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:23.456844  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:23.457166  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:23.457186  717279 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:57:23.458085  717279 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 11:57:26.606427  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-952358
	
	I1101 11:57:26.606462  717279 ubuntu.go:182] provisioning hostname "old-k8s-version-952358"
	I1101 11:57:26.606527  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:26.625682  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:26.626138  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:26.626161  717279 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-952358 && echo "old-k8s-version-952358" | sudo tee /etc/hostname
	I1101 11:57:26.787705  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-952358
	
	I1101 11:57:26.787832  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:26.807628  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:26.807947  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:26.807970  717279 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-952358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-952358/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-952358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:57:26.957856  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:57:26.957880  717279 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:57:26.957907  717279 ubuntu.go:190] setting up certificates
	I1101 11:57:26.957923  717279 provision.go:84] configureAuth start
	I1101 11:57:26.957988  717279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:57:26.974936  717279 provision.go:143] copyHostCerts
	I1101 11:57:26.975005  717279 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:57:26.975029  717279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:57:26.975119  717279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:57:26.975282  717279 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:57:26.975294  717279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:57:26.975324  717279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:57:26.975394  717279 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:57:26.975403  717279 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:57:26.975431  717279 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:57:26.975489  717279 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-952358 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-952358]
	I1101 11:57:27.278768  717279 provision.go:177] copyRemoteCerts
	I1101 11:57:27.278840  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:57:27.278887  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.297517  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
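The line above contains everything needed to reach the node by hand: Docker publishes the container's port 22 on 127.0.0.1:33780, and minikube logs in as user docker with the per-machine id_rsa key. A minimal shell sketch, illustrative only and not executed in this run:

    # Resolve the published host port for 22/tcp, then SSH in the same way minikube does
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-952358)
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
        -i /home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa \
        docker@127.0.0.1 hostname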
	I1101 11:57:27.405501  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:57:27.428391  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 11:57:27.448527  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:57:27.468541  717279 provision.go:87] duration metric: took 510.601361ms to configureAuth
	I1101 11:57:27.468609  717279 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:57:27.468812  717279 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:57:27.468925  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.486525  717279 main.go:143] libmachine: Using SSH client type: native
	I1101 11:57:27.486832  717279 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33780 <nil> <nil>}
	I1101 11:57:27.486851  717279 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:57:27.810256  717279 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:57:27.810281  717279 machine.go:97] duration metric: took 4.374415217s to provisionDockerMachine
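Provisioning finished by writing /etc/sysconfig/crio.minikube and restarting crio. A quick editor-added sketch of verifying that file; the expected contents are exactly the tee payload logged at 11:57:27:

    # Environment file minikube just wrote for CRI-O
    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '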
	I1101 11:57:27.810292  717279 start.go:293] postStartSetup for "old-k8s-version-952358" (driver="docker")
	I1101 11:57:27.810320  717279 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:57:27.810405  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:57:27.810465  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.829019  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:27.933510  717279 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:57:27.936906  717279 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:57:27.936936  717279 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:57:27.936948  717279 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:57:27.937000  717279 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:57:27.937087  717279 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:57:27.937209  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:57:27.944680  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:57:27.963444  717279 start.go:296] duration metric: took 153.137505ms for postStartSetup
	I1101 11:57:27.963603  717279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:57:27.963667  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:27.983822  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:28.087108  717279 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:57:28.092073  717279 fix.go:56] duration metric: took 4.996171668s for fixHost
	I1101 11:57:28.092100  717279 start.go:83] releasing machines lock for "old-k8s-version-952358", held for 4.996222229s
	I1101 11:57:28.092170  717279 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-952358
	I1101 11:57:28.109517  717279 ssh_runner.go:195] Run: cat /version.json
	I1101 11:57:28.109556  717279 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:57:28.109582  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:28.109620  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:28.135446  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:28.135969  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:28.237324  717279 ssh_runner.go:195] Run: systemctl --version
	I1101 11:57:28.328606  717279 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:57:28.366007  717279 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:57:28.370661  717279 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:57:28.370752  717279 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:57:28.378781  717279 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:57:28.378816  717279 start.go:496] detecting cgroup driver to use...
	I1101 11:57:28.378866  717279 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:57:28.378938  717279 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:57:28.394033  717279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:57:28.407069  717279 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:57:28.407164  717279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:57:28.422477  717279 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:57:28.436222  717279 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:57:28.555521  717279 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:57:28.686703  717279 docker.go:234] disabling docker service ...
	I1101 11:57:28.686821  717279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:57:28.701458  717279 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:57:28.714698  717279 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:57:28.842367  717279 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:57:28.958762  717279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:57:28.972182  717279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:57:28.987296  717279 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 11:57:28.987413  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:28.996677  717279 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:57:28.996746  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.007561  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.017431  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.026797  717279 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:57:29.041845  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.050846  717279 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.059453  717279 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:57:29.068225  717279 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:57:29.076123  717279 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:57:29.083753  717279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:57:29.202519  717279 ssh_runner.go:195] Run: sudo systemctl restart crio
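The sed commands at 11:57:28-29 edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before this restart. A sketch of confirming the result, using only the key names taken from those commands:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio    # the restart above has to leave crio running before kubelet starts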
	I1101 11:57:29.345609  717279 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:57:29.345823  717279 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:57:29.350295  717279 start.go:564] Will wait 60s for crictl version
	I1101 11:57:29.350410  717279 ssh_runner.go:195] Run: which crictl
	I1101 11:57:29.354420  717279 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:57:29.384270  717279 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:57:29.384436  717279 ssh_runner.go:195] Run: crio --version
	I1101 11:57:29.421338  717279 ssh_runner.go:195] Run: crio --version
	I1101 11:57:29.454507  717279 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 11:57:29.457364  717279 cli_runner.go:164] Run: docker network inspect old-k8s-version-952358 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:57:29.474232  717279 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 11:57:29.478007  717279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:57:29.488782  717279 kubeadm.go:884] updating cluster {Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:57:29.488894  717279 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 11:57:29.488950  717279 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:57:29.523452  717279 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:57:29.523479  717279 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:57:29.523537  717279 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:57:29.556013  717279 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:57:29.556037  717279 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:57:29.556046  717279 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 11:57:29.556153  717279 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-952358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
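Those kubelet flags are rendered into a systemd drop-in, which the scp lines at 11:57:29 below place at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. An editor-added sketch for inspecting it on the node:

    # systemctl cat shows the base unit plus its drop-ins, including 10-kubeadm.conf
    sudo systemctl cat kubelet
    # after changing a drop-in by hand, reload and restart (minikube does the same below)
    sudo systemctl daemon-reload && sudo systemctl restart kubelet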
	I1101 11:57:29.556251  717279 ssh_runner.go:195] Run: crio config
	I1101 11:57:29.634445  717279 cni.go:84] Creating CNI manager for ""
	I1101 11:57:29.634470  717279 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:57:29.634484  717279 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:57:29.634507  717279 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-952358 NodeName:old-k8s-version-952358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:57:29.634654  717279 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-952358"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:57:29.634732  717279 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 11:57:29.642773  717279 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:57:29.642852  717279 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:57:29.650751  717279 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 11:57:29.664644  717279 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:57:29.679356  717279 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
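The 2160-byte scp above stages the kubeadm YAML shown earlier as /var/tmp/minikube/kubeadm.yaml.new. On a first-time bootstrap it would feed kubeadm roughly as sketched here; the flag set is an assumption, and this particular run instead diffs the staged file against the existing one (see the `sudo diff -u` at 11:57:30) and restarts the cluster:

    # Hypothetical fresh bootstrap with the rendered config (not executed in this run)
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml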
	I1101 11:57:29.692567  717279 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:57:29.697030  717279 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:57:29.706728  717279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:57:29.824660  717279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:57:29.842238  717279 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358 for IP: 192.168.85.2
	I1101 11:57:29.842308  717279 certs.go:195] generating shared ca certs ...
	I1101 11:57:29.842340  717279 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:29.842523  717279 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:57:29.842598  717279 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:57:29.842622  717279 certs.go:257] generating profile certs ...
	I1101 11:57:29.842748  717279 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.key
	I1101 11:57:29.842845  717279 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key.1ce2c540
	I1101 11:57:29.842919  717279 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key
	I1101 11:57:29.843055  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:57:29.843115  717279 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:57:29.843140  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:57:29.843187  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:57:29.843242  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:57:29.843297  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:57:29.843369  717279 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:57:29.843986  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:57:29.867255  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:57:29.890600  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:57:29.921077  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:57:29.946621  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 11:57:29.973520  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:57:29.997365  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:57:30.039029  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:57:30.064879  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:57:30.099222  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:57:30.132835  717279 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:57:30.163610  717279 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:57:30.180031  717279 ssh_runner.go:195] Run: openssl version
	I1101 11:57:30.189394  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:57:30.199169  717279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:57:30.203537  717279 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:57:30.203666  717279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:57:30.246439  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:57:30.255335  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:57:30.264484  717279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:57:30.268418  717279 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:57:30.268488  717279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:57:30.310721  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:57:30.318881  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:57:30.327417  717279 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:57:30.331203  717279 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:57:30.331333  717279 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:57:30.373107  717279 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
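The three ln -fs commands above build OpenSSL's hashed-certificate layout in /etc/ssl/certs: each PEM is linked under the hash of its subject name with a .0 suffix. A sketch using the hash already visible for minikubeCA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem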
	I1101 11:57:30.381271  717279 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:57:30.385229  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:57:30.426908  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:57:30.477440  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:57:30.526485  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:57:30.587441  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:57:30.662155  717279 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
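The -checkend 86400 probes above ask whether each control-plane certificate is still valid 24 hours from now; exit status 0 means it is, and a non-zero status flags a certificate that is about to expire. A one-line sketch:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expires within 24h"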
	I1101 11:57:30.777108  717279 kubeadm.go:401] StartCluster: {Name:old-k8s-version-952358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-952358 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:57:30.777222  717279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:57:30.777289  717279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:57:30.842798  717279 cri.go:89] found id: "9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63"
	I1101 11:57:30.842821  717279 cri.go:89] found id: "aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be"
	I1101 11:57:30.842826  717279 cri.go:89] found id: "8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339"
	I1101 11:57:30.842838  717279 cri.go:89] found id: "bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86"
	I1101 11:57:30.842843  717279 cri.go:89] found id: ""
	I1101 11:57:30.842893  717279 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 11:57:30.862324  717279 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T11:57:30Z" level=error msg="open /run/runc: no such file or directory"
	I1101 11:57:30.862417  717279 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:57:30.876163  717279 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:57:30.876187  717279 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:57:30.876242  717279 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:57:30.886217  717279 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:57:30.886859  717279 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-952358" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:57:30.887146  717279 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-952358" cluster setting kubeconfig missing "old-k8s-version-952358" context setting]
	I1101 11:57:30.887622  717279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:30.889371  717279 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:57:30.901872  717279 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 11:57:30.901907  717279 kubeadm.go:602] duration metric: took 25.713579ms to restartPrimaryControlPlane
	I1101 11:57:30.901917  717279 kubeadm.go:403] duration metric: took 124.820734ms to StartCluster
	I1101 11:57:30.901932  717279 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:30.902001  717279 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:57:30.902848  717279 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:57:30.903045  717279 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:57:30.903430  717279 config.go:182] Loaded profile config "old-k8s-version-952358": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 11:57:30.903423  717279 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:57:30.903510  717279 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-952358"
	I1101 11:57:30.903525  717279 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-952358"
	W1101 11:57:30.903531  717279 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:57:30.903556  717279 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:57:30.904013  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:30.904174  717279 addons.go:70] Setting dashboard=true in profile "old-k8s-version-952358"
	I1101 11:57:30.904186  717279 addons.go:239] Setting addon dashboard=true in "old-k8s-version-952358"
	W1101 11:57:30.904192  717279 addons.go:248] addon dashboard should already be in state true
	I1101 11:57:30.904212  717279 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:57:30.904516  717279 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-952358"
	I1101 11:57:30.904534  717279 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-952358"
	I1101 11:57:30.904588  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:30.904821  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:30.908895  717279 out.go:179] * Verifying Kubernetes components...
	I1101 11:57:30.914312  717279 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:57:30.972925  717279 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:57:30.972925  717279 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:57:30.977010  717279 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:57:30.977034  717279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:57:30.977102  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:30.980433  717279 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:57:30.983296  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:57:30.983321  717279 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:57:30.983405  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:30.989086  717279 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-952358"
	W1101 11:57:30.989110  717279 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:57:30.989137  717279 host.go:66] Checking if "old-k8s-version-952358" exists ...
	I1101 11:57:30.989557  717279 cli_runner.go:164] Run: docker container inspect old-k8s-version-952358 --format={{.State.Status}}
	I1101 11:57:31.031988  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:31.056993  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:31.059745  717279 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:57:31.059766  717279 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:57:31.059830  717279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-952358
	I1101 11:57:31.086698  717279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33780 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/old-k8s-version-952358/id_rsa Username:docker}
	I1101 11:57:31.266532  717279 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:57:31.292234  717279 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-952358" to be "Ready" ...
	I1101 11:57:31.298972  717279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:57:31.339652  717279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:57:31.403704  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:57:31.403768  717279 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:57:31.450821  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:57:31.450894  717279 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:57:31.503548  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:57:31.503619  717279 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:57:31.571455  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:57:31.571516  717279 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:57:31.645620  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:57:31.645708  717279 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:57:31.677942  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:57:31.678018  717279 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:57:31.715886  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:57:31.715959  717279 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:57:31.740414  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:57:31.740491  717279 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:57:31.767443  717279 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:57:31.767517  717279 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:57:31.791195  717279 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:57:34.755446  717279 node_ready.go:49] node "old-k8s-version-952358" is "Ready"
	I1101 11:57:34.755472  717279 node_ready.go:38] duration metric: took 3.463154815s for node "old-k8s-version-952358" to be "Ready" ...
	I1101 11:57:34.755484  717279 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:57:34.755544  717279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:57:35.803404  717279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.50435892s)
	I1101 11:57:36.338141  717279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.998408257s)
	I1101 11:57:36.842635  717279 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.05134942s)
	I1101 11:57:36.842904  717279 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.087347137s)
	I1101 11:57:36.842925  717279 api_server.go:72] duration metric: took 5.939854268s to wait for apiserver process to appear ...
	I1101 11:57:36.842931  717279 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:57:36.842950  717279 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 11:57:36.845979  717279 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-952358 addons enable metrics-server
	
	I1101 11:57:36.848865  717279 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1101 11:57:36.851715  717279 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 11:57:36.852058  717279 addons.go:515] duration metric: took 5.948637454s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 11:57:36.853097  717279 api_server.go:141] control plane version: v1.28.0
	I1101 11:57:36.853121  717279 api_server.go:131] duration metric: took 10.184402ms to wait for apiserver health ...
	I1101 11:57:36.853131  717279 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:57:36.856789  717279 system_pods.go:59] 8 kube-system pods found
	I1101 11:57:36.856822  717279 system_pods.go:61] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:57:36.856838  717279 system_pods.go:61] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:57:36.856844  717279 system_pods.go:61] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:57:36.856852  717279 system_pods.go:61] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:57:36.856865  717279 system_pods.go:61] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:57:36.856871  717279 system_pods.go:61] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:57:36.856880  717279 system_pods.go:61] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:57:36.856893  717279 system_pods.go:61] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Running
	I1101 11:57:36.856899  717279 system_pods.go:74] duration metric: took 3.762969ms to wait for pod list to return data ...
	I1101 11:57:36.856918  717279 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:57:36.865921  717279 default_sa.go:45] found service account: "default"
	I1101 11:57:36.865949  717279 default_sa.go:55] duration metric: took 9.02428ms for default service account to be created ...
	I1101 11:57:36.865959  717279 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:57:36.869213  717279 system_pods.go:86] 8 kube-system pods found
	I1101 11:57:36.869245  717279 system_pods.go:89] "coredns-5dd5756b68-pmb27" [5ed95095-99da-4744-9e27-3c17af6a824a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:57:36.869254  717279 system_pods.go:89] "etcd-old-k8s-version-952358" [47a39b81-001d-4c6f-8c0d-c5f3f4785421] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:57:36.869261  717279 system_pods.go:89] "kindnet-sn7mz" [552a2264-bdd9-4b5f-b48c-369e6eff47aa] Running
	I1101 11:57:36.869268  717279 system_pods.go:89] "kube-apiserver-old-k8s-version-952358" [e51ba789-bf75-410a-95f8-3d02157e11b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:57:36.869280  717279 system_pods.go:89] "kube-controller-manager-old-k8s-version-952358" [e54caac4-1422-4a20-9dbb-fbceea3bc4db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:57:36.869290  717279 system_pods.go:89] "kube-proxy-kmxd8" [5424cb6f-ae01-4a4c-a66d-4c079aef46c6] Running
	I1101 11:57:36.869296  717279 system_pods.go:89] "kube-scheduler-old-k8s-version-952358" [4e5fe046-ae08-40a7-825e-fa77da451c18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:57:36.869304  717279 system_pods.go:89] "storage-provisioner" [caedd5ef-fa47-4b4e-b104-945d4b554f7f] Running
	I1101 11:57:36.869313  717279 system_pods.go:126] duration metric: took 3.347752ms to wait for k8s-apps to be running ...
	I1101 11:57:36.869325  717279 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:57:36.869384  717279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:57:36.909778  717279 system_svc.go:56] duration metric: took 40.442761ms WaitForService to wait for kubelet
	I1101 11:57:36.909811  717279 kubeadm.go:587] duration metric: took 6.006738724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:57:36.909832  717279 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:57:36.912603  717279 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:57:36.912636  717279 node_conditions.go:123] node cpu capacity is 2
	I1101 11:57:36.912648  717279 node_conditions.go:105] duration metric: took 2.810785ms to run NodePressure ...
	I1101 11:57:36.912661  717279 start.go:242] waiting for startup goroutines ...
	I1101 11:57:36.912668  717279 start.go:247] waiting for cluster config update ...
	I1101 11:57:36.912679  717279 start.go:256] writing updated cluster config ...
	I1101 11:57:36.912965  717279 ssh_runner.go:195] Run: rm -f paused
	I1101 11:57:36.917383  717279 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:57:36.921775  717279 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pmb27" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:57:38.927346  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:40.933149  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:43.428077  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:45.927351  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:47.928130  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:49.929126  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:52.428270  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:54.428379  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:56.428431  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:57:58.928926  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:01.427902  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:03.428572  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:05.927537  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:08.427589  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:10.928139  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	W1101 11:58:13.427849  717279 pod_ready.go:104] pod "coredns-5dd5756b68-pmb27" is not "Ready", error: <nil>
	I1101 11:58:14.934614  717279 pod_ready.go:94] pod "coredns-5dd5756b68-pmb27" is "Ready"
	I1101 11:58:14.934645  717279 pod_ready.go:86] duration metric: took 38.012840589s for pod "coredns-5dd5756b68-pmb27" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.938147  717279 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.944885  717279 pod_ready.go:94] pod "etcd-old-k8s-version-952358" is "Ready"
	I1101 11:58:14.944915  717279 pod_ready.go:86] duration metric: took 6.743693ms for pod "etcd-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.948129  717279 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.954327  717279 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-952358" is "Ready"
	I1101 11:58:14.954357  717279 pod_ready.go:86] duration metric: took 6.196872ms for pod "kube-apiserver-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:14.957620  717279 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.126644  717279 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-952358" is "Ready"
	I1101 11:58:15.126676  717279 pod_ready.go:86] duration metric: took 169.030593ms for pod "kube-controller-manager-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.327007  717279 pod_ready.go:83] waiting for pod "kube-proxy-kmxd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.725731  717279 pod_ready.go:94] pod "kube-proxy-kmxd8" is "Ready"
	I1101 11:58:15.725758  717279 pod_ready.go:86] duration metric: took 398.677067ms for pod "kube-proxy-kmxd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:15.926436  717279 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:16.325895  717279 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-952358" is "Ready"
	I1101 11:58:16.325922  717279 pod_ready.go:86] duration metric: took 399.461938ms for pod "kube-scheduler-old-k8s-version-952358" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:58:16.325935  717279 pod_ready.go:40] duration metric: took 39.408517431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:58:16.384975  717279 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 11:58:16.388046  717279 out.go:203] 
	W1101 11:58:16.390923  717279 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 11:58:16.393734  717279 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 11:58:16.396578  717279 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-952358" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.023143659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.030726362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.032034719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.054225676Z" level=info msg="Created container badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd/dashboard-metrics-scraper" id=db063886-4209-4d68-9164-fdbfcde2091e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.057407961Z" level=info msg="Starting container: badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64" id=36757657-5124-4d48-a221-d58ea2ada9b6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.062519182Z" level=info msg="Started container" PID=1643 containerID=badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd/dashboard-metrics-scraper id=36757657-5124-4d48-a221-d58ea2ada9b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914
	Nov 01 11:58:12 old-k8s-version-952358 conmon[1641]: conmon badf1228f71bd4d5c2c3 <ninfo>: container 1643 exited with status 1
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.300113597Z" level=info msg="Removing container: 3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58" id=dc897409-ef04-4b33-98b1-527b76a45612 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.310324043Z" level=info msg="Error loading conmon cgroup of container 3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58: cgroup deleted" id=dc897409-ef04-4b33-98b1-527b76a45612 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 11:58:12 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:12.313497761Z" level=info msg="Removed container 3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd/dashboard-metrics-scraper" id=dc897409-ef04-4b33-98b1-527b76a45612 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.841116469Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.845812563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.845849174Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.845873839Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.849483108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.849520623Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.849544098Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.853083779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.853120193Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.853145285Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.856392359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.856429003Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.856454333Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.859818782Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 11:58:15 old-k8s-version-952358 crio[651]: time="2025-11-01T11:58:15.859853458Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	badf1228f71bd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   edbd8683ccecc       dashboard-metrics-scraper-5f989dc9cf-xn7cd       kubernetes-dashboard
	095e20aeaa7c4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   7e79d97623751       storage-provisioner                              kube-system
	64a312d9f4c53       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   73c58832ed164       kubernetes-dashboard-8694d4445c-nhfb8            kubernetes-dashboard
	f088bfba05aee       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   0a99d8b18ef3e       busybox                                          default
	1673b27ac77a2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   f0a9201055483       coredns-5dd5756b68-pmb27                         kube-system
	f807a58e116c1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   7e79d97623751       storage-provisioner                              kube-system
	d4b9fdc04889e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   4d64e0f152004       kindnet-sn7mz                                    kube-system
	700b0703d579a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   4d9affad8b0ea       kube-proxy-kmxd8                                 kube-system
	9862e16108821       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   28543c92fa70b       etcd-old-k8s-version-952358                      kube-system
	aada77cf39436       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   7b1a4e0209855       kube-scheduler-old-k8s-version-952358            kube-system
	8f5fc92ea368a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   2494a3565646a       kube-apiserver-old-k8s-version-952358            kube-system
	bd67e3cf97272       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   0c3b8ff5c7d58       kube-controller-manager-old-k8s-version-952358   kube-system
	
	
	==> coredns [1673b27ac77a27952be16353ca8f921102c799a4d215bc4be1208247b488b327] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57697 - 29279 "HINFO IN 7631426333321470715.1191036699852579226. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026269705s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-952358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-952358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=old-k8s-version-952358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_56_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:56:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-952358
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:58:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:58:05 +0000   Sat, 01 Nov 2025 11:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-952358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dbeefb29-03d1-48b6-93d2-8db0a71a3a9e
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-pmb27                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-952358                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-sn7mz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-952358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-952358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-kmxd8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-952358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-xn7cd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-nhfb8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s               kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s               kubelet          Node old-k8s-version-952358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s               kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node old-k8s-version-952358 event: Registered Node old-k8s-version-952358 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-952358 status is now: NodeReady
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node old-k8s-version-952358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node old-k8s-version-952358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node old-k8s-version-952358 event: Registered Node old-k8s-version-952358 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:34] overlayfs: idmapped layers are currently not supported
	[ +35.784283] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9862e16108821021bd8df93bedbfb37c346a536b912b903ad541724af8a95a63] <==
	{"level":"info","ts":"2025-11-01T11:57:30.895422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:57:30.895488Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T11:57:30.903342Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T11:57:30.911515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T11:57:30.919308Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T11:57:30.919405Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T11:57:30.911554Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T11:57:30.921743Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T11:57:30.911786Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:57:30.921836Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:57:30.921875Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:57:31.071394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T11:57:31.071453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T11:57:31.071492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T11:57:31.071506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.071512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.071528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.071537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T11:57:31.082253Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-952358 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T11:57:31.082295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:57:31.083655Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T11:57:31.082306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:57:31.102356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T11:57:31.173104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T11:57:31.173145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:58:33 up  3:41,  0 user,  load average: 1.62, 2.85, 2.51
	Linux old-k8s-version-952358 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d4b9fdc04889ed0379546b17184bd798a9da3238ce38b3beb28e6b6a07f5d656] <==
	I1101 11:57:35.621977       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:57:35.622381       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 11:57:35.622549       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:57:35.622590       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:57:35.622629       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:57:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:57:35.839470       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:57:35.839488       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:57:35.839497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:57:35.839773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 11:58:05.840382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 11:58:05.840382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 11:58:05.840506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 11:58:05.840602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 11:58:07.140607       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:58:07.140635       1 metrics.go:72] Registering metrics
	I1101 11:58:07.140711       1 controller.go:711] "Syncing nftables rules"
	I1101 11:58:15.839551       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:58:15.840806       1 main.go:301] handling current node
	I1101 11:58:25.843971       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:58:25.844036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f5fc92ea368a4e5105ba5fa2ece7de8be48ea25eaad1c294bbcbf46af48d339] <==
	I1101 11:57:34.777612       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:57:34.778925       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 11:57:34.778950       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1101 11:57:34.792650       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I1101 11:57:34.842113       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:57:34.855104       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 11:57:34.879667       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1101 11:57:34.888069       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 11:57:34.892925       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:57:35.453186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:57:36.646031       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 11:57:36.697550       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 11:57:36.725849       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:57:36.739731       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:57:36.763172       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 11:57:36.817550       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.192.249"}
	I1101 11:57:36.834314       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.102.94"}
	E1101 11:57:44.780374       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1101 11:57:47.472361       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 11:57:47.553832       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 11:57:47.632845       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1101 11:57:54.781139       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1101 11:58:04.785386       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1101 11:58:14.786735       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1101 11:58:24.787641       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [bd67e3cf9727246ce5753ba2dc1d2d69471c2daad2ac92a051f96d8686b8be86] <==
	I1101 11:57:47.498518       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-xn7cd"
	I1101 11:57:47.508766       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-nhfb8"
	I1101 11:57:47.538013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.079929ms"
	I1101 11:57:47.554681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.628868ms"
	I1101 11:57:47.576312       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I1101 11:57:47.576746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.007135ms"
	I1101 11:57:47.577535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.609µs"
	I1101 11:57:47.585600       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1101 11:57:47.585743       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1101 11:57:47.613440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.359822ms"
	I1101 11:57:47.614145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.165µs"
	I1101 11:57:47.643110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.547573ms"
	I1101 11:57:47.643295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.084µs"
	I1101 11:57:47.679836       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 11:57:47.699454       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 11:57:47.699484       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 11:57:54.224415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.416275ms"
	I1101 11:57:55.215785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.753µs"
	I1101 11:57:56.224771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.517µs"
	I1101 11:57:59.247552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.650656ms"
	I1101 11:57:59.247638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.023µs"
	I1101 11:58:12.324468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.035µs"
	I1101 11:58:14.615059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.441763ms"
	I1101 11:58:14.615191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.328µs"
	I1101 11:58:19.633968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.209µs"
	
	
	==> kube-proxy [700b0703d579a0b705abcfec7e4dc2f3e95f8991206525d04be84898e72ba25d] <==
	I1101 11:57:35.692948       1 server_others.go:69] "Using iptables proxy"
	I1101 11:57:35.720270       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 11:57:35.934396       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:57:35.936506       1 server_others.go:152] "Using iptables Proxier"
	I1101 11:57:35.936549       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 11:57:35.936557       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 11:57:35.936582       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 11:57:35.936778       1 server.go:846] "Version info" version="v1.28.0"
	I1101 11:57:35.936809       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:57:35.943555       1 config.go:188] "Starting service config controller"
	I1101 11:57:35.943577       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 11:57:35.943593       1 config.go:97] "Starting endpoint slice config controller"
	I1101 11:57:35.943597       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 11:57:35.943989       1 config.go:315] "Starting node config controller"
	I1101 11:57:35.943996       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 11:57:36.049796       1 shared_informer.go:318] Caches are synced for node config
	I1101 11:57:36.049842       1 shared_informer.go:318] Caches are synced for service config
	I1101 11:57:36.049882       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [aada77cf39436aec3b32621421a714661e50a1fe93ec37e9d9c39d42ba5b50be] <==
	I1101 11:57:32.899184       1 serving.go:348] Generated self-signed cert in-memory
	W1101 11:57:34.678166       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:57:34.678198       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:57:34.678208       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:57:34.678217       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:57:34.815954       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 11:57:34.816056       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:57:34.820984       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:57:34.821092       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 11:57:34.821846       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 11:57:34.821919       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 11:57:34.921845       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 11:57:48 old-k8s-version-952358 kubelet[778]: E1101 11:57:48.762879     778 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 11:57:48 old-k8s-version-952358 kubelet[778]: E1101 11:57:48.763057     778 projected.go:198] Error preparing data for projected volume kube-api-access-42lnc for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-nhfb8: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 11:57:48 old-k8s-version-952358 kubelet[778]: E1101 11:57:48.763190     778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6cd70f9-cc0d-4ddf-9438-3f717d09de5d-kube-api-access-42lnc podName:b6cd70f9-cc0d-4ddf-9438-3f717d09de5d nodeName:}" failed. No retries permitted until 2025-11-01 11:57:49.263166689 +0000 UTC m=+19.419123396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-42lnc" (UniqueName: "kubernetes.io/projected/b6cd70f9-cc0d-4ddf-9438-3f717d09de5d-kube-api-access-42lnc") pod "kubernetes-dashboard-8694d4445c-nhfb8" (UID: "b6cd70f9-cc0d-4ddf-9438-3f717d09de5d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 11:57:49 old-k8s-version-952358 kubelet[778]: W1101 11:57:49.638532     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914 WatchSource:0}: Error finding container edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914: Status 404 returned error can't find the container with id edbd8683ccecc993aaab2a065bafebce5b4d2335d3b4990d14de07f322e71914
	Nov 01 11:57:49 old-k8s-version-952358 kubelet[778]: W1101 11:57:49.668795     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5af3c19b6c5749276816c484d142f80cc27aacce5e295232472acd526f9d0431/crio-73c58832ed164e25a380c7b9a360b70a63f6a6357e6987a2659b4b8823a65710 WatchSource:0}: Error finding container 73c58832ed164e25a380c7b9a360b70a63f6a6357e6987a2659b4b8823a65710: Status 404 returned error can't find the container with id 73c58832ed164e25a380c7b9a360b70a63f6a6357e6987a2659b4b8823a65710
	Nov 01 11:57:54 old-k8s-version-952358 kubelet[778]: I1101 11:57:54.195158     778 scope.go:117] "RemoveContainer" containerID="20f22c04e3d9401d26dfe124953b292e2fe7a51d21d61b0e9183ab72f1e5256a"
	Nov 01 11:57:55 old-k8s-version-952358 kubelet[778]: I1101 11:57:55.198730     778 scope.go:117] "RemoveContainer" containerID="20f22c04e3d9401d26dfe124953b292e2fe7a51d21d61b0e9183ab72f1e5256a"
	Nov 01 11:57:55 old-k8s-version-952358 kubelet[778]: I1101 11:57:55.198930     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:57:55 old-k8s-version-952358 kubelet[778]: E1101 11:57:55.199205     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:57:56 old-k8s-version-952358 kubelet[778]: I1101 11:57:56.207332     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:57:56 old-k8s-version-952358 kubelet[778]: E1101 11:57:56.207604     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:57:59 old-k8s-version-952358 kubelet[778]: I1101 11:57:59.619786     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:57:59 old-k8s-version-952358 kubelet[778]: E1101 11:57:59.620094     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:58:06 old-k8s-version-952358 kubelet[778]: I1101 11:58:06.279143     778 scope.go:117] "RemoveContainer" containerID="f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc"
	Nov 01 11:58:06 old-k8s-version-952358 kubelet[778]: I1101 11:58:06.326970     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-nhfb8" podStartSLOduration=10.519729975 podCreationTimestamp="2025-11-01 11:57:47 +0000 UTC" firstStartedPulling="2025-11-01 11:57:49.672315704 +0000 UTC m=+19.828272411" lastFinishedPulling="2025-11-01 11:57:58.470927398 +0000 UTC m=+28.626884105" observedRunningTime="2025-11-01 11:57:59.230703338 +0000 UTC m=+29.386660053" watchObservedRunningTime="2025-11-01 11:58:06.318341669 +0000 UTC m=+36.474298375"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: I1101 11:58:12.018416     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: I1101 11:58:12.298280     778 scope.go:117] "RemoveContainer" containerID="3e46b828c3dc2169218811a57ce9f3b8f4251bff63092efda6f424924ebb2f58"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: I1101 11:58:12.298485     778 scope.go:117] "RemoveContainer" containerID="badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	Nov 01 11:58:12 old-k8s-version-952358 kubelet[778]: E1101 11:58:12.298763     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:58:19 old-k8s-version-952358 kubelet[778]: I1101 11:58:19.619069     778 scope.go:117] "RemoveContainer" containerID="badf1228f71bd4d5c2c3de0661ec1029c5ea05dd2beb7a2eaeb6a8bf615b3a64"
	Nov 01 11:58:19 old-k8s-version-952358 kubelet[778]: E1101 11:58:19.619850     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xn7cd_kubernetes-dashboard(a4b5817a-daa8-4799-b23a-f20e396bb08b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xn7cd" podUID="a4b5817a-daa8-4799-b23a-f20e396bb08b"
	Nov 01 11:58:28 old-k8s-version-952358 kubelet[778]: I1101 11:58:28.715683     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 11:58:28 old-k8s-version-952358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 11:58:28 old-k8s-version-952358 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 11:58:28 old-k8s-version-952358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [64a312d9f4c53e1433cc7d19282cc174c8a3ba911a400b3129ce54fe724fd5ba] <==
	2025/11/01 11:57:58 Using namespace: kubernetes-dashboard
	2025/11/01 11:57:58 Using in-cluster config to connect to apiserver
	2025/11/01 11:57:58 Using secret token for csrf signing
	2025/11/01 11:57:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 11:57:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 11:57:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 11:57:58 Generating JWE encryption key
	2025/11/01 11:57:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 11:57:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 11:57:58 Initializing JWE encryption key from synchronized object
	2025/11/01 11:57:58 Creating in-cluster Sidecar client
	2025/11/01 11:57:58 Serving insecurely on HTTP port: 9090
	2025/11/01 11:57:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 11:58:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 11:57:58 Starting overwatch
	
	
	==> storage-provisioner [095e20aeaa7c4e797b3b245447eb78281f018d603dfb8a3b04200dd8864113ff] <==
	I1101 11:58:06.349419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 11:58:06.364298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 11:58:06.364438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 11:58:23.767037       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 11:58:23.767215       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-952358_6d4f130f-9148-476c-917b-38a958fd9a9d!
	I1101 11:58:23.767714       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5ce5b6b-8d31-4770-8329-c46e139ecfe3", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-952358_6d4f130f-9148-476c-917b-38a958fd9a9d became leader
	I1101 11:58:23.867499       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-952358_6d4f130f-9148-476c-917b-38a958fd9a9d!
	
	
	==> storage-provisioner [f807a58e116c1b5abd957b7ad73b4e5c5a22ce7eb21839a6557e000c7c9bc9dc] <==
	I1101 11:57:35.563945       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 11:58:05.566170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-952358 -n old-k8s-version-952358
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-952358 -n old-k8s-version-952358: exit status 2 (405.345207ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-952358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.26s)
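Note on the status check above: `--format={{.APIServer}}` is a Go text/template rendered against minikube's status struct, which is why the command can print `Running` while still exiting with status 2 (the exit code encodes overall component state, hence the harness's "may be ok"). The sketch below only illustrates that template mechanism; the `Status` struct and its fields here are hypothetical stand-ins, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct a status command
// would render; the field names are illustrative only.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running", Kubeconfig: "Configured"}
	// The --format flag value becomes the template body.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Running" even when other components are stopped or paused;
	// a wrapper command can still signal that through its process exit code.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}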

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.394588ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:00:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-198717 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-198717 describe deploy/metrics-server -n kube-system: exit status 1 (96.845779ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-198717 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
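The root cause visible in the stderr above is the paused-state check: the CLI shells out to `sudo runc list -f json` and treats a non-zero exit as failure, and on this crio node the runc root directory `/run/runc` is absent, so the listing aborts before any JSON is produced. Below is a minimal sketch of such a check, assuming `runc` is on PATH; the decoded field names are assumptions for illustration, not a pinned schema, and this is not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState holds the fields of interest from `runc list -f json`;
// the JSON keys here are assumed for illustration.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the path hit in the log above: runc exits non-zero
		// (here because /run/runc does not exist), so the whole check fails.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, fmt.Errorf("decode runc list output: %w", err)
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}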
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-198717
helpers_test.go:243: (dbg) docker inspect no-preload-198717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d",
	        "Created": "2025-11-01T11:58:39.349581274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 721381,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:58:39.477640895Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/hosts",
	        "LogPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d-json.log",
	        "Name": "/no-preload-198717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-198717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-198717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d",
	                "LowerDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-198717",
	                "Source": "/var/lib/docker/volumes/no-preload-198717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-198717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-198717",
	                "name.minikube.sigs.k8s.io": "no-preload-198717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1009806accd3c20e73f9d83f8bc4b6b3577fb2e573e687d5e8bc568378ea3c7e",
	            "SandboxKey": "/var/run/docker/netns/1009806accd3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-198717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:5e:d6:b9:d0:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "984f332f5e0d4cc9526af8fdf6f1a1ce27a9c2697f377b762d5103dc82663350",
	                    "EndpointID": "4c81341fe7fbf944d8d3778b30545b27f999b6f695a5f3ce4672ede665f30415",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-198717",
	                        "c52fbb51f4c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-198717 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-198717 logs -n 25: (1.247570606s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-507511 sudo crio config                                                                                                                                                                                                             │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │                     │
	│ delete  │ -p cilium-507511                                                                                                                                                                                                                              │ cilium-507511             │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p force-systemd-env-857548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ force-systemd-flag-643844 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ delete  │ -p force-systemd-flag-643844                                                                                                                                                                                                                  │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p force-systemd-env-857548                                                                                                                                                                                                                   │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ cert-options-505831 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	│ stop    │ -p old-k8s-version-952358 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860        │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:59:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:59:01.718395  724423 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:59:01.718864  724423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:59:01.718915  724423 out.go:374] Setting ErrFile to fd 2...
	I1101 11:59:01.718953  724423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:59:01.719302  724423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:59:01.719931  724423 out.go:368] Setting JSON to false
	I1101 11:59:01.721226  724423 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13291,"bootTime":1761985051,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:59:01.721346  724423 start.go:143] virtualization:  
	I1101 11:59:01.727001  724423 out.go:179] * [embed-certs-816860] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:59:01.730563  724423 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:59:01.730634  724423 notify.go:221] Checking for updates...
	I1101 11:59:01.737573  724423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:59:01.741074  724423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:59:01.744300  724423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:59:01.747523  724423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:59:01.750712  724423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:59:01.754367  724423 config.go:182] Loaded profile config "no-preload-198717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:59:01.754560  724423 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:59:01.803011  724423 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:59:01.803184  724423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:59:01.918985  724423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-01 11:59:01.905554936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:59:01.919121  724423 docker.go:319] overlay module found
	I1101 11:59:01.922444  724423 out.go:179] * Using the docker driver based on user configuration
	I1101 11:59:01.925440  724423 start.go:309] selected driver: docker
	I1101 11:59:01.925459  724423 start.go:930] validating driver "docker" against <nil>
	I1101 11:59:01.925473  724423 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:59:01.926260  724423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:59:02.023649  724423 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-01 11:59:02.01331055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:59:02.023806  724423 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:59:02.024031  724423 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:59:02.027003  724423 out.go:179] * Using Docker driver with root privileges
	I1101 11:59:02.029917  724423 cni.go:84] Creating CNI manager for ""
	I1101 11:59:02.030006  724423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:59:02.030021  724423 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 11:59:02.030103  724423 start.go:353] cluster config:
	{Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:59:02.033169  724423 out.go:179] * Starting "embed-certs-816860" primary control-plane node in "embed-certs-816860" cluster
	I1101 11:59:02.036157  724423 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 11:59:02.039005  724423 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:59:02.041829  724423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:59:02.041888  724423 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 11:59:02.041898  724423 cache.go:59] Caching tarball of preloaded images
	I1101 11:59:02.041990  724423 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 11:59:02.042000  724423 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:59:02.042110  724423 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json ...
	I1101 11:59:02.042128  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json: {Name:mk2194dd3002c430ecdf0654b12b7bdb7effa738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:02.042285  724423 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:59:02.090781  724423 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:59:02.090799  724423 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:59:02.090812  724423 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:59:02.090833  724423 start.go:360] acquireMachinesLock for embed-certs-816860: {Name:mkc466573abafda4e2b4a3754427ac01b3fcf9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:59:02.090936  724423 start.go:364] duration metric: took 87.435µs to acquireMachinesLock for "embed-certs-816860"
	I1101 11:59:02.090962  724423 start.go:93] Provisioning new machine with config: &{Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:59:02.091038  724423 start.go:125] createHost starting for "" (driver="docker")
	I1101 11:58:59.297266  720939 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.077794916s)
	I1101 11:58:59.297289  720939 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 11:58:59.297306  720939 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 11:58:59.297382  720939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 11:58:59.297451  720939 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.078181916s)
	I1101 11:58:59.297467  720939 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 11:58:59.297482  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1101 11:59:01.239464  720939 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.942059398s)
	I1101 11:59:01.239487  720939 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 11:59:01.239505  720939 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 11:59:01.239561  720939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
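	The interleaved 720939 entries above belong to the no-preload-198717 profile, which runs without the preload tarball: each cached image archive is copied to /var/lib/minikube/images on the node and loaded into the node's container storage with podman (which CRI-O then reads). A minimal sketch of that loop, run on the node, with the archive names taken from the log:

	    # load the cached image archives into the node's image store (shared by podman and CRI-O)
	    for img in kube-proxy_v1.34.1 kube-apiserver_v1.34.1 etcd_3.6.4-0 storage-provisioner_v5; do
	        sudo podman load -i "/var/lib/minikube/images/${img}"
	    done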
	I1101 11:59:02.094571  724423 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:59:02.094823  724423 start.go:159] libmachine.API.Create for "embed-certs-816860" (driver="docker")
	I1101 11:59:02.094860  724423 client.go:173] LocalClient.Create starting
	I1101 11:59:02.094947  724423 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 11:59:02.094987  724423 main.go:143] libmachine: Decoding PEM data...
	I1101 11:59:02.095000  724423 main.go:143] libmachine: Parsing certificate...
	I1101 11:59:02.095058  724423 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 11:59:02.095090  724423 main.go:143] libmachine: Decoding PEM data...
	I1101 11:59:02.095104  724423 main.go:143] libmachine: Parsing certificate...
	I1101 11:59:02.095479  724423 cli_runner.go:164] Run: docker network inspect embed-certs-816860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 11:59:02.123618  724423 cli_runner.go:211] docker network inspect embed-certs-816860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 11:59:02.123718  724423 network_create.go:284] running [docker network inspect embed-certs-816860] to gather additional debugging logs...
	I1101 11:59:02.123737  724423 cli_runner.go:164] Run: docker network inspect embed-certs-816860
	W1101 11:59:02.138694  724423 cli_runner.go:211] docker network inspect embed-certs-816860 returned with exit code 1
	I1101 11:59:02.138719  724423 network_create.go:287] error running [docker network inspect embed-certs-816860]: docker network inspect embed-certs-816860: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-816860 not found
	I1101 11:59:02.138732  724423 network_create.go:289] output of [docker network inspect embed-certs-816860]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-816860 not found
	
	** /stderr **
	I1101 11:59:02.138838  724423 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:59:02.154190  724423 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 11:59:02.154493  724423 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 11:59:02.154818  724423 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 11:59:02.155245  724423 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cbba0}
	I1101 11:59:02.155268  724423 network_create.go:124] attempt to create docker network embed-certs-816860 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 11:59:02.155326  724423 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-816860 embed-certs-816860
	I1101 11:59:02.231017  724423 network_create.go:108] docker network embed-certs-816860 192.168.76.0/24 created
	I1101 11:59:02.231045  724423 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-816860" container
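	Before creating the node container, minikube scans the existing bridge networks (192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already taken above), settles on 192.168.76.0/24 and creates a dedicated user-defined network from which the node gets the static IP 192.168.76.2. The equivalent standalone commands, copied from the cli_runner invocations above (the inspect format string is shortened here for readability):

	    # create the per-profile bridge network on the first free private /24
	    docker network create --driver=bridge \
	        --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	        --label=created_by.minikube.sigs.k8s.io=true \
	        --label=name.minikube.sigs.k8s.io=embed-certs-816860 \
	        embed-certs-816860
	    # confirm the subnet and gateway that were assigned
	    docker network inspect embed-certs-816860 \
	        --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'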
	I1101 11:59:02.231118  724423 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:59:02.251435  724423 cli_runner.go:164] Run: docker volume create embed-certs-816860 --label name.minikube.sigs.k8s.io=embed-certs-816860 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:59:02.275634  724423 oci.go:103] Successfully created a docker volume embed-certs-816860
	I1101 11:59:02.275735  724423 cli_runner.go:164] Run: docker run --rm --name embed-certs-816860-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-816860 --entrypoint /usr/bin/test -v embed-certs-816860:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:59:02.999374  724423 oci.go:107] Successfully prepared a docker volume embed-certs-816860
	I1101 11:59:02.999428  724423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:59:02.999448  724423 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:59:02.999522  724423 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-816860:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
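	The embed-certs run does use the preload: a named volume is created for the node's /var, and the lz4-compressed image tarball is unpacked into it by a throwaway kicbase container whose entrypoint is tar. A condensed sketch of the same two steps (the kicbase reference is shortened, the full pinned digest is in the log above, and $PWD/... stands in for the cache path shown there):

	    KIC=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773    # digest pin omitted here
	    docker volume create embed-certs-816860 \
	        --label name.minikube.sigs.k8s.io=embed-certs-816860 \
	        --label created_by.minikube.sigs.k8s.io=true
	    # unpack the preloaded images into the volume that becomes the node's /var
	    docker run --rm --entrypoint /usr/bin/tar \
	        -v "$PWD/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
	        -v embed-certs-816860:/extractDir \
	        "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir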
	I1101 11:59:05.983836  720939 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.744256515s)
	I1101 11:59:05.983866  720939 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 11:59:05.983890  720939 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 11:59:05.983937  720939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 11:59:06.737523  720939 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 11:59:06.737554  720939 cache_images.go:125] Successfully loaded all cached images
	I1101 11:59:06.737560  720939 cache_images.go:94] duration metric: took 19.852075714s to LoadCachedImages
	I1101 11:59:06.737571  720939 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 11:59:06.737667  720939 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-198717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:59:06.737781  720939 ssh_runner.go:195] Run: crio config
	I1101 11:59:06.800950  720939 cni.go:84] Creating CNI manager for ""
	I1101 11:59:06.800971  720939 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:59:06.800989  720939 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:59:06.801020  720939 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-198717 NodeName:no-preload-198717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:59:06.801202  720939 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-198717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:59:06.801378  720939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:59:06.817991  720939 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 11:59:06.818058  720939 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 11:59:06.831437  720939 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1101 11:59:06.831937  720939 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1101 11:59:06.832164  720939 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1101 11:59:06.832251  720939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 11:59:06.836006  720939 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 11:59:06.836041  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1101 11:59:07.669211  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:59:07.712459  720939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 11:59:07.716471  720939 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 11:59:07.716504  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1101 11:59:07.757457  720939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 11:59:07.789969  720939 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 11:59:07.790006  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
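	Because the binaries directory is empty on first start, kubelet, kubeadm and kubectl are downloaded from dl.k8s.io (the checksum=file:... suffix in the URLs is minikube asking for verification against the published .sha256 files), cached under .minikube/cache/linux/arm64/v1.34.1 and then copied to /var/lib/minikube/binaries on the node. A standalone equivalent of the download-and-verify step (illustrative, not minikube's own code path):

	    # fetch the v1.34.1 arm64 binaries and verify them against the published checksums
	    VER=v1.34.1 ARCH=arm64
	    for bin in kubelet kubeadm kubectl; do
	        curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${bin}"
	        curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${bin}.sha256"
	        echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check -
	    done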
	I1101 11:59:08.331963  720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:59:08.340422  720939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 11:59:08.355319  720939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:59:08.368765  720939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 11:59:08.383020  720939 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:59:08.387674  720939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:59:08.399143  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:59:08.515534  720939 ssh_runner.go:195] Run: sudo systemctl start kubelet
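	The kubelet is wired up by writing the service unit and the 10-kubeadm.conf drop-in (the 352- and 367-byte scp lines above), reloading systemd and starting the service. A sketch that reproduces the drop-in from the unit text printed at 11:59:06.737667 above:

	    # stage the kubelet drop-in with the flags minikube generated, then start the service
	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    printf '%s\n' \
	        '[Unit]' 'Wants=crio.service' '' \
	        '[Service]' 'ExecStart=' \
	        'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-198717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2' \
	        '' '[Install]' \
	        | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet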
	I1101 11:59:08.534684  720939 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717 for IP: 192.168.85.2
	I1101 11:59:08.534708  720939 certs.go:195] generating shared ca certs ...
	I1101 11:59:08.534725  720939 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:08.534860  720939 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:59:08.534908  720939 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:59:08.534920  720939 certs.go:257] generating profile certs ...
	I1101 11:59:08.534989  720939 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.key
	I1101 11:59:08.535007  720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt with IP's: []
	I1101 11:59:08.635139  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt ...
	I1101 11:59:08.635175  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: {Name:mk985cf899166c60ce4300c0de4fc7c1c0c8c250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:08.635378  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.key ...
	I1101 11:59:08.635392  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.key: {Name:mk89cae1e96c1b7f7f57355474fa310e140b305f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:08.635489  720939 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key.5fa2dae3
	I1101 11:59:08.635505  720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt.5fa2dae3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 11:59:09.448502  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt.5fa2dae3 ...
	I1101 11:59:09.448532  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt.5fa2dae3: {Name:mkb4872c65e4aa12185acc0a7fd6576e138d7cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:09.448742  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key.5fa2dae3 ...
	I1101 11:59:09.448753  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key.5fa2dae3: {Name:mk53aa25b996d27801042f54322342e4367823f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:09.448850  720939 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt.5fa2dae3 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt
	I1101 11:59:09.448927  720939 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key.5fa2dae3 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key
	I1101 11:59:09.448988  720939 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.key
	I1101 11:59:09.449010  720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.crt with IP's: []
	I1101 11:59:09.741132  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.crt ...
	I1101 11:59:09.741169  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.crt: {Name:mkd153a0a0a3de019d4ffa24af4d9529e93acc75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:09.741358  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.key ...
	I1101 11:59:09.741374  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.key: {Name:mk4d541e4d7b48dbd57b1e4fa0221aeb9f752a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:09.741577  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:59:09.741621  720939 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:59:09.741636  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:59:09.741661  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:59:09.741687  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:59:09.741739  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:59:09.741794  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:59:09.742372  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:59:09.763835  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:59:09.784523  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:59:09.803226  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:59:09.821566  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 11:59:09.840469  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:59:09.860022  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:59:09.879890  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:59:09.898910  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:59:09.923788  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:59:09.943805  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:59:09.964277  720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:59:09.979857  720939 ssh_runner.go:195] Run: openssl version
	I1101 11:59:09.988686  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:59:09.998706  720939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:59:10.004889  720939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:59:10.004985  720939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:59:10.049268  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:59:10.059279  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:59:10.070307  720939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:59:10.075293  720939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:59:10.075350  720939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:59:10.124993  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:59:10.139459  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:59:10.157568  720939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:59:10.164199  720939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:59:10.164268  720939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:59:10.227210  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
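	The custom CAs are made visible to OpenSSL-based clients on the node by copying each PEM into /usr/share/ca-certificates and symlinking it into /etc/ssl/certs under its subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above), the lookup scheme OpenSSL uses for hashed certificate directories. The same two steps by hand, using the minikubeCA.pem name from the log:

	    # install a CA where OpenSSL can find it by subject hash
	    sudo cp minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	The trailing .0 distinguishes certificates that happen to share a hash; the test -L || ln -fs construction in the log keeps the step idempotent.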
	I1101 11:59:10.256093  720939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:59:10.265216  720939 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:59:10.265269  720939 kubeadm.go:401] StartCluster: {Name:no-preload-198717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:59:10.265345  720939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:59:10.265400  720939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:59:10.308334  720939 cri.go:89] found id: ""
	I1101 11:59:10.308438  720939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:59:10.321797  720939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:59:10.334145  720939 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 11:59:10.334213  720939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:59:10.348854  720939 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:59:10.348879  720939 kubeadm.go:158] found existing configuration files:
	
	I1101 11:59:10.348966  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:59:10.359108  720939 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:59:10.359205  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:59:10.368075  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:59:10.378442  720939 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:59:10.378541  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:59:10.389852  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:59:10.400996  720939 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:59:10.401093  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:59:10.419749  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:59:10.432675  720939 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:59:10.432743  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:59:10.443953  720939 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 11:59:10.546504  720939 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:59:10.546908  720939 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:59:10.666434  720939 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 11:59:10.666720  720939 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 11:59:10.666763  720939 kubeadm.go:319] OS: Linux
	I1101 11:59:10.666811  720939 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 11:59:10.666871  720939 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 11:59:10.666921  720939 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 11:59:10.666971  720939 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 11:59:10.667022  720939 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 11:59:10.667075  720939 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 11:59:10.667123  720939 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 11:59:10.667174  720939 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 11:59:10.667222  720939 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 11:59:10.784614  720939 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:59:10.785094  720939 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:59:10.785215  720939 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:59:10.812399  720939 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
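	kubeadm init is launched with the generated /var/tmp/minikube/kubeadm.yaml and a long --ignore-preflight-errors list, since several checks (SystemVerification, Swap, Mem, NumCPU, the bridge-nf-call-iptables file) cannot be satisfied inside a docker-driver node. To exercise the same config without modifying the node, kubeadm also offers a dry run; a sketch assuming the same paths and PATH prefix as the Start line above:

	    # validate the generated config and preflight outcome without writing any state
	    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run \
	        --ignore-preflight-errors=SystemVerification,Swap,Mem,NumCPU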
	I1101 11:59:09.974935  724423 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-816860:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.975369598s)
	I1101 11:59:09.974964  724423 kic.go:203] duration metric: took 6.975511861s to extract preloaded images to volume ...
	W1101 11:59:09.975097  724423 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:59:09.975206  724423 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:59:10.074856  724423 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-816860 --name embed-certs-816860 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-816860 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-816860 --network embed-certs-816860 --ip 192.168.76.2 --volume embed-certs-816860:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:59:10.421511  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Running}}
	I1101 11:59:10.445933  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 11:59:10.474191  724423 cli_runner.go:164] Run: docker exec embed-certs-816860 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:59:10.534755  724423 oci.go:144] the created container "embed-certs-816860" has a running status.
	I1101 11:59:10.534794  724423 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa...
	I1101 11:59:10.870008  724423 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:59:10.897299  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 11:59:10.924328  724423 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:59:10.924352  724423 kic_runner.go:114] Args: [docker exec --privileged embed-certs-816860 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:59:11.018383  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 11:59:11.045680  724423 machine.go:94] provisionDockerMachine start ...
	I1101 11:59:11.045807  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:11.077590  724423 main.go:143] libmachine: Using SSH client type: native
	I1101 11:59:11.078241  724423 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33790 <nil> <nil>}
	I1101 11:59:11.078362  724423 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:59:11.079220  724423 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58086->127.0.0.1:33790: read: connection reset by peer
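	The first SSH dial to the freshly started container fails with a connection reset because sshd inside it is not up yet; libmachine keeps retrying, and the hostname command only succeeds about three seconds later (11:59:14 below). A rough shell equivalent of that wait, using the published port 33790, the docker user and the generated key from the sshutil lines (key path shortened to its profile-relative location):

	    # poll until sshd in the node container accepts connections
	    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 33790 \
	            -i .minikube/machines/embed-certs-816860/id_rsa docker@127.0.0.1 true; do
	        sleep 1
	    done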
	I1101 11:59:10.818989  720939 out.go:252]   - Generating certificates and keys ...
	I1101 11:59:10.819088  720939 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:59:10.819158  720939 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:59:12.066272  720939 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:59:14.258406  724423 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-816860
	
	I1101 11:59:14.258450  724423 ubuntu.go:182] provisioning hostname "embed-certs-816860"
	I1101 11:59:14.258524  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:14.281928  724423 main.go:143] libmachine: Using SSH client type: native
	I1101 11:59:14.282246  724423 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33790 <nil> <nil>}
	I1101 11:59:14.282263  724423 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-816860 && echo "embed-certs-816860" | sudo tee /etc/hostname
	I1101 11:59:14.448587  724423 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-816860
	
	I1101 11:59:14.448757  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:14.469104  724423 main.go:143] libmachine: Using SSH client type: native
	I1101 11:59:14.469420  724423 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33790 <nil> <nil>}
	I1101 11:59:14.469437  724423 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-816860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-816860/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-816860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:59:14.626258  724423 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:59:14.626346  724423 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 11:59:14.626404  724423 ubuntu.go:190] setting up certificates
	I1101 11:59:14.626437  724423 provision.go:84] configureAuth start
	I1101 11:59:14.626548  724423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 11:59:14.648925  724423 provision.go:143] copyHostCerts
	I1101 11:59:14.649002  724423 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 11:59:14.649018  724423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 11:59:14.649096  724423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 11:59:14.649204  724423 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 11:59:14.649215  724423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 11:59:14.649244  724423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 11:59:14.649346  724423 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 11:59:14.649357  724423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 11:59:14.649426  724423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 11:59:14.649500  724423 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.embed-certs-816860 san=[127.0.0.1 192.168.76.2 embed-certs-816860 localhost minikube]
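	configureAuth regenerates the docker-machine style server certificate, signed by the shared CA and carrying the node's addresses and names as SANs (127.0.0.1, 192.168.76.2, embed-certs-816860, localhost, minikube). minikube does this in Go; purely as an illustration, an openssl equivalent of issuing such a certificate from the same ca.pem/ca-key.pem pair would be:

	    # issue a server certificate with the node's SANs, signed by the minikube CA
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	        -subj "/O=jenkins.embed-certs-816860" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -days 365 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-816860,DNS:localhost,DNS:minikube')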
	I1101 11:59:14.890961  724423 provision.go:177] copyRemoteCerts
	I1101 11:59:14.891036  724423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:59:14.891083  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:14.912513  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:15.030384  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 11:59:15.064454  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 11:59:15.090274  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:59:15.110532  724423 provision.go:87] duration metric: took 484.05312ms to configureAuth
	I1101 11:59:15.110566  724423 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:59:15.110794  724423 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:59:15.110964  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:15.133207  724423 main.go:143] libmachine: Using SSH client type: native
	I1101 11:59:15.133544  724423 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33790 <nil> <nil>}
	I1101 11:59:15.133570  724423 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:59:15.408796  724423 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:59:15.408844  724423 machine.go:97] duration metric: took 4.36313103s to provisionDockerMachine
	I1101 11:59:15.408855  724423 client.go:176] duration metric: took 13.313988317s to LocalClient.Create
	I1101 11:59:15.408870  724423 start.go:167] duration metric: took 13.314048304s to libmachine.API.Create "embed-certs-816860"
	I1101 11:59:15.408878  724423 start.go:293] postStartSetup for "embed-certs-816860" (driver="docker")
	I1101 11:59:15.408899  724423 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:59:15.408976  724423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:59:15.409023  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:15.427280  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:15.534991  724423 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:59:15.538978  724423 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:59:15.539005  724423 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:59:15.539017  724423 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 11:59:15.539074  724423 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 11:59:15.539155  724423 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 11:59:15.539261  724423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:59:15.547659  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:59:15.568229  724423 start.go:296] duration metric: took 159.324443ms for postStartSetup
	I1101 11:59:15.568586  724423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 11:59:15.587900  724423 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json ...
	I1101 11:59:15.588197  724423 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:59:15.588237  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:15.607660  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:15.710698  724423 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:59:15.718142  724423 start.go:128] duration metric: took 13.627088971s to createHost
	I1101 11:59:15.718164  724423 start.go:83] releasing machines lock for "embed-certs-816860", held for 13.627219812s
	I1101 11:59:15.718249  724423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 11:59:15.735370  724423 ssh_runner.go:195] Run: cat /version.json
	I1101 11:59:15.735415  724423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:59:15.735478  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:15.735421  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:15.761376  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:15.777868  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:16.004318  724423 ssh_runner.go:195] Run: systemctl --version
	I1101 11:59:16.012323  724423 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:59:16.057974  724423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:59:16.063107  724423 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:59:16.063252  724423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:59:16.095291  724423 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:59:16.095361  724423 start.go:496] detecting cgroup driver to use...
	I1101 11:59:16.095415  724423 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:59:16.095509  724423 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:59:16.119866  724423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:59:16.136112  724423 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:59:16.136174  724423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:59:16.155567  724423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:59:16.178246  724423 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:59:16.332297  724423 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:59:16.517183  724423 docker.go:234] disabling docker service ...
	I1101 11:59:16.517296  724423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:59:16.551050  724423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:59:16.566866  724423 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:59:16.719042  724423 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:59:16.865401  724423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:59:16.880471  724423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:59:16.894321  724423 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:59:16.894463  724423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.903022  724423 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:59:16.903088  724423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.912208  724423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.921455  724423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.930598  724423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:59:16.939093  724423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.948408  724423 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.962882  724423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:59:16.972520  724423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:59:16.981037  724423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:59:16.989226  724423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:59:17.145375  724423 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:59:17.300367  724423 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:59:17.300507  724423 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:59:17.306086  724423 start.go:564] Will wait 60s for crictl version
	I1101 11:59:17.306204  724423 ssh_runner.go:195] Run: which crictl
	I1101 11:59:17.310983  724423 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:59:17.345669  724423 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 11:59:17.345858  724423 ssh_runner.go:195] Run: crio --version
	I1101 11:59:17.380278  724423 ssh_runner.go:195] Run: crio --version
	I1101 11:59:17.429717  724423 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
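	Before the crictl version check above, the node was switched over to CRI-O: crictl is pointed at the CRI-O socket and /etc/crio/crio.conf.d/02-crio.conf is edited for the pause image and the cgroupfs driver. A condensed sketch of those steps, abridged from the sed commands in the log (not the literal minikube implementation):

	  # point crictl at CRI-O and align the runtime with the kubelet's cgroup driver
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio
	  sudo /usr/local/bin/crictl version   # should report RuntimeName: cri-o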
	I1101 11:59:12.934285  720939 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:59:13.355070  720939 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:59:13.566627  720939 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:59:13.964301  720939 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:59:13.964655  720939 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-198717] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 11:59:15.589598  720939 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:59:15.589819  720939 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-198717] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 11:59:16.035779  720939 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:59:16.347160  720939 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:59:16.841348  720939 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:59:16.841942  720939 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:59:17.130159  720939 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:59:17.974088  720939 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:59:18.374902  720939 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:59:18.709283  720939 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:59:18.943948  720939 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:59:18.945290  720939 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:59:18.949157  720939 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:59:17.432736  724423 cli_runner.go:164] Run: docker network inspect embed-certs-816860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:59:17.455281  724423 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 11:59:17.459304  724423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:59:17.470451  724423 kubeadm.go:884] updating cluster {Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:59:17.470574  724423 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:59:17.470640  724423 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:59:17.524414  724423 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:59:17.524439  724423 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:59:17.524539  724423 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:59:17.552556  724423 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:59:17.552575  724423 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:59:17.552583  724423 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 11:59:17.552666  724423 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-816860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
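	The unit fragment above is the kubelet's ExecStart override; per the scp lines further down it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. A quick way to confirm what systemd actually merged (assumed debugging commands, not part of the logged run):

	  sudo systemctl daemon-reload
	  systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	  systemctl show -p ExecStart kubelet   # effective ExecStart with --node-ip and --hostname-override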
	I1101 11:59:17.552747  724423 ssh_runner.go:195] Run: crio config
	I1101 11:59:17.629482  724423 cni.go:84] Creating CNI manager for ""
	I1101 11:59:17.629552  724423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:59:17.629585  724423 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:59:17.629639  724423 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-816860 NodeName:embed-certs-816860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:59:17.629834  724423 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-816860"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:59:17.629950  724423 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:59:17.638011  724423 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:59:17.638129  724423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:59:17.645862  724423 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 11:59:17.660215  724423 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:59:17.673740  724423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
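	The kubeadm config printed above is staged as kubeadm.yaml.new here and only copied over kubeadm.yaml right before init runs (both steps appear later in this log). A trimmed sketch of that hand-off; the real invocation in the log passes a much longer --ignore-preflight-errors list:

	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification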
	I1101 11:59:17.686932  724423 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:59:17.691072  724423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:59:17.700766  724423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:59:17.837629  724423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:59:17.862115  724423 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860 for IP: 192.168.76.2
	I1101 11:59:17.862137  724423 certs.go:195] generating shared ca certs ...
	I1101 11:59:17.862154  724423 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:17.862287  724423 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 11:59:17.862339  724423 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 11:59:17.862351  724423 certs.go:257] generating profile certs ...
	I1101 11:59:17.862406  724423 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.key
	I1101 11:59:17.862423  724423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.crt with IP's: []
	I1101 11:59:19.045989  724423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.crt ...
	I1101 11:59:19.046022  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.crt: {Name:mk8cdccc9c25dd25b1d33652305a25f8f8b9848b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:19.046293  724423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.key ...
	I1101 11:59:19.046310  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.key: {Name:mka8f889819ced4b682078b7337b77cdfb89fc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:19.046479  724423 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key.a2d2a5ad
	I1101 11:59:19.046502  724423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt.a2d2a5ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 11:59:19.235729  724423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt.a2d2a5ad ...
	I1101 11:59:19.235761  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt.a2d2a5ad: {Name:mk1642384509a6e234113ddc49cac360f4e68b08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:19.235922  724423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key.a2d2a5ad ...
	I1101 11:59:19.235938  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key.a2d2a5ad: {Name:mkb68b2d9a8bfb1f9056eaa779d427cb931b4ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:19.236014  724423 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt.a2d2a5ad -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt
	I1101 11:59:19.236101  724423 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key.a2d2a5ad -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key
	I1101 11:59:19.236161  724423 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key
	I1101 11:59:19.236179  724423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.crt with IP's: []
	I1101 11:59:19.760292  724423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.crt ...
	I1101 11:59:19.760325  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.crt: {Name:mk0f2150a5a90f8a0593e7a44ddebf816a657302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:19.760506  724423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key ...
	I1101 11:59:19.760524  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key: {Name:mka93acc08a68420947becc6395b03ff053acb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:19.760710  724423 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 11:59:19.760759  724423 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 11:59:19.760780  724423 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:59:19.760809  724423 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 11:59:19.760836  724423 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:59:19.760865  724423 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 11:59:19.760910  724423 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 11:59:19.761505  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:59:19.791693  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:59:19.811225  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:59:19.830852  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 11:59:19.853292  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 11:59:19.878422  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:59:19.906074  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:59:19.931752  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:59:19.959404  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 11:59:19.984219  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 11:59:20.004556  724423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:59:20.033255  724423 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:59:20.049787  724423 ssh_runner.go:195] Run: openssl version
	I1101 11:59:20.056697  724423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:59:20.065904  724423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:59:20.070216  724423 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:59:20.070297  724423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:59:20.114427  724423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:59:20.123350  724423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 11:59:20.132888  724423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 11:59:20.137079  724423 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 11:59:20.137197  724423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 11:59:20.179037  724423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 11:59:20.188878  724423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 11:59:20.197673  724423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 11:59:20.201524  724423 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 11:59:20.201630  724423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 11:59:20.242573  724423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
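	The b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links: each CA bundle copied into /usr/share/ca-certificates gets a <hash>.0 symlink under /etc/ssl/certs so TLS libraries can look it up. Reproducing one link by hand, wiring together the same two commands the log runs separately:

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"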
	I1101 11:59:20.251287  724423 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:59:20.254972  724423 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:59:20.255059  724423 kubeadm.go:401] StartCluster: {Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:59:20.255162  724423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:59:20.255263  724423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:59:20.284362  724423 cri.go:89] found id: ""
	I1101 11:59:20.284483  724423 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:59:20.293049  724423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:59:20.301368  724423 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 11:59:20.301497  724423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:59:20.309196  724423 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:59:20.309262  724423 kubeadm.go:158] found existing configuration files:
	
	I1101 11:59:20.309321  724423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:59:20.316997  724423 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:59:20.317099  724423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:59:20.324640  724423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:59:20.332712  724423 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:59:20.332784  724423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:59:20.340394  724423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:59:20.348191  724423 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:59:20.348285  724423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:59:20.355519  724423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:59:20.363334  724423 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:59:20.363407  724423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
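	The four grep/rm pairs above are the stale-config check: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed before kubeadm init (here none exist yet, so the removals are no-ops). Condensed into a loop for readability:

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done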
	I1101 11:59:20.370689  724423 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 11:59:20.457529  724423 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:59:20.464028  724423 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:59:20.508434  724423 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 11:59:20.508512  724423 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 11:59:20.508554  724423 kubeadm.go:319] OS: Linux
	I1101 11:59:20.508606  724423 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 11:59:20.508661  724423 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 11:59:20.508715  724423 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 11:59:20.508769  724423 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 11:59:20.508825  724423 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 11:59:20.508879  724423 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 11:59:20.508931  724423 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 11:59:20.508986  724423 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 11:59:20.509034  724423 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 11:59:20.613588  724423 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:59:20.613733  724423 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:59:20.613835  724423 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:59:20.629362  724423 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:59:20.635044  724423 out.go:252]   - Generating certificates and keys ...
	I1101 11:59:20.635161  724423 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:59:20.635238  724423 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:59:21.130054  724423 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:59:21.478044  724423 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:59:18.952877  720939 out.go:252]   - Booting up control plane ...
	I1101 11:59:18.952984  720939 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:59:18.955727  720939 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:59:18.959688  720939 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:59:19.009668  720939 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:59:19.009820  720939 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:59:19.018338  720939 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:59:19.020384  720939 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:59:19.023821  720939 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:59:19.176058  720939 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:59:19.176184  720939 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:59:20.178056  720939 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001988222s
	I1101 11:59:20.183297  720939 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:59:20.183714  720939 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 11:59:20.184110  720939 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:59:20.185068  720939 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:59:21.755278  724423 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:59:22.019134  724423 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:59:22.405167  724423 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:59:22.405775  724423 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-816860 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 11:59:22.963947  724423 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:59:22.964536  724423 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-816860 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 11:59:23.351613  724423 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:59:24.291663  724423 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:59:24.621316  724423 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:59:24.621881  724423 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:59:25.828797  724423 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:59:26.542045  724423 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:59:26.736022  724423 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:59:26.996853  724423 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:59:27.081520  724423 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:59:27.082724  724423 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:59:27.085877  724423 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:59:25.482569  720939 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.297072201s
	I1101 11:59:28.913605  720939 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.72777863s
	I1101 11:59:29.187037  720939 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.002518634s
	I1101 11:59:29.208657  720939 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:59:29.224835  720939 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:59:29.266590  720939 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:59:29.266801  720939 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-198717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:59:29.288989  720939 kubeadm.go:319] [bootstrap-token] Using token: 1b6i03.h1h6po63ye3azh6f
	I1101 11:59:29.292407  720939 out.go:252]   - Configuring RBAC rules ...
	I1101 11:59:29.292535  720939 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:59:29.301300  720939 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:59:29.318783  720939 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:59:29.327059  720939 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:59:29.333497  720939 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:59:29.340429  720939 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:59:29.594948  720939 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:59:30.064673  720939 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:59:30.599802  720939 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:59:30.601258  720939 kubeadm.go:319] 
	I1101 11:59:30.601334  720939 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:59:30.601344  720939 kubeadm.go:319] 
	I1101 11:59:30.601421  720939 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:59:30.601431  720939 kubeadm.go:319] 
	I1101 11:59:30.601456  720939 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:59:30.601518  720939 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:59:30.601579  720939 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:59:30.601587  720939 kubeadm.go:319] 
	I1101 11:59:30.601641  720939 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:59:30.601649  720939 kubeadm.go:319] 
	I1101 11:59:30.601728  720939 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:59:30.601738  720939 kubeadm.go:319] 
	I1101 11:59:30.601790  720939 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:59:30.601875  720939 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:59:30.601952  720939 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:59:30.601963  720939 kubeadm.go:319] 
	I1101 11:59:30.602047  720939 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:59:30.602128  720939 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:59:30.602137  720939 kubeadm.go:319] 
	I1101 11:59:30.602221  720939 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1b6i03.h1h6po63ye3azh6f \
	I1101 11:59:30.602331  720939 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 11:59:30.602355  720939 kubeadm.go:319] 	--control-plane 
	I1101 11:59:30.602363  720939 kubeadm.go:319] 
	I1101 11:59:30.602447  720939 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:59:30.602456  720939 kubeadm.go:319] 
	I1101 11:59:30.602536  720939 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1b6i03.h1h6po63ye3azh6f \
	I1101 11:59:30.602641  720939 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 11:59:30.608916  720939 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 11:59:30.609154  720939 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 11:59:30.609260  720939 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:59:30.609281  720939 cni.go:84] Creating CNI manager for ""
	I1101 11:59:30.609288  720939 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:59:30.612418  720939 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 11:59:27.089413  724423 out.go:252]   - Booting up control plane ...
	I1101 11:59:27.089526  724423 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:59:27.089609  724423 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:59:27.090662  724423 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:59:27.107785  724423 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:59:27.107899  724423 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:59:27.117565  724423 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:59:27.127389  724423 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:59:27.127846  724423 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:59:27.340720  724423 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:59:27.340845  724423 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:59:28.841521  724423 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501131813s
	I1101 11:59:28.852813  724423 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:59:28.852920  724423 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 11:59:28.853014  724423 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:59:28.853096  724423 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:59:30.615379  720939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 11:59:30.622463  720939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 11:59:30.622531  720939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 11:59:30.656352  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 11:59:31.187405  720939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:59:31.187538  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:31.187619  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-198717 minikube.k8s.io/updated_at=2025_11_01T11_59_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=no-preload-198717 minikube.k8s.io/primary=true
	I1101 11:59:31.756263  720939 ops.go:34] apiserver oom_adj: -16
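	The -16 read from /proc/<pid>/oom_adj above is how minikube confirms the API server is shielded from the kernel OOM killer (negative values make the process a last-resort kill target). The same check by hand:

	  cat /proc/$(pgrep kube-apiserver)/oom_adj        # -16 in this run
	  cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # newer interface for the same setting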
	I1101 11:59:31.756368  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:32.256763  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:32.757336  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:33.257070  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:33.756860  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:34.256479  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:34.757079  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:34.871769  720939 kubeadm.go:1114] duration metric: took 3.684277071s to wait for elevateKubeSystemPrivileges
	I1101 11:59:34.871799  720939 kubeadm.go:403] duration metric: took 24.606534296s to StartCluster
	I1101 11:59:34.871816  720939 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:34.871880  720939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:59:34.872599  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:34.872832  720939 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:59:34.872933  720939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 11:59:34.873207  720939 config.go:182] Loaded profile config "no-preload-198717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:59:34.873256  720939 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:59:34.873329  720939 addons.go:70] Setting storage-provisioner=true in profile "no-preload-198717"
	I1101 11:59:34.873342  720939 addons.go:239] Setting addon storage-provisioner=true in "no-preload-198717"
	I1101 11:59:34.873370  720939 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 11:59:34.873908  720939 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 11:59:34.874408  720939 addons.go:70] Setting default-storageclass=true in profile "no-preload-198717"
	I1101 11:59:34.874431  720939 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-198717"
	I1101 11:59:34.874748  720939 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 11:59:34.876397  720939 out.go:179] * Verifying Kubernetes components...
	I1101 11:59:34.881240  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:59:34.919525  720939 addons.go:239] Setting addon default-storageclass=true in "no-preload-198717"
	I1101 11:59:34.919563  720939 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 11:59:34.920028  720939 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 11:59:34.922182  720939 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:59:34.710046  724423 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.856021131s
	I1101 11:59:36.429585  724423 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.576955592s
	I1101 11:59:34.924970  720939 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:59:34.924990  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:59:34.925054  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 11:59:34.968002  720939 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:59:34.968023  720939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:59:34.968103  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 11:59:34.975420  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 11:59:35.003868  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 11:59:35.379717  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:59:35.468927  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:59:35.573551  720939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 11:59:35.573747  720939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:59:37.339066  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.959268171s)
	I1101 11:59:37.339121  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.870135493s)
	I1101 11:59:37.339446  720939 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.765648548s)
	I1101 11:59:37.340163  720939 node_ready.go:35] waiting up to 6m0s for node "no-preload-198717" to be "Ready" ...
	I1101 11:59:37.340407  720939 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.766781413s)
	I1101 11:59:37.340423  720939 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 11:59:37.418580  720939 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 11:59:37.421979  720939 addons.go:515] duration metric: took 2.548706046s for enable addons: enabled=[storage-provisioner default-storageclass]
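	    [Note] The sed pipeline logged above (the ssh_runner call that completed at 11:59:37.340407) patches the CoreDNS Corefile in place: it inserts a "log" directive ahead of "errors" and a "hosts" block ahead of "forward . /etc/resolv.conf". Assuming the stock Corefile that kubeadm ships, the patched fragment would read roughly like this; an illustrative reconstruction from the sed expression, not text captured from the cluster:
	        .:53 {
	            log
	            errors
	            ...
	            hosts {
	               192.168.85.1 host.minikube.internal
	               fallthrough
	            }
	            forward . /etc/resolv.conf
	            ...
	        }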
	I1101 11:59:38.355859  724423 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.502172862s
	I1101 11:59:38.382856  724423 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:59:38.398849  724423 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:59:38.416777  724423 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:59:38.417322  724423 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-816860 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:59:38.435342  724423 kubeadm.go:319] [bootstrap-token] Using token: 5ga2z8.9lj6296wzdr1k7rs
	I1101 11:59:38.438589  724423 out.go:252]   - Configuring RBAC rules ...
	I1101 11:59:38.438722  724423 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:59:38.447839  724423 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:59:38.460795  724423 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:59:38.468025  724423 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:59:38.473285  724423 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:59:38.478558  724423 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:59:38.766434  724423 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:59:39.377160  724423 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:59:39.766362  724423 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:59:39.767937  724423 kubeadm.go:319] 
	I1101 11:59:39.768012  724423 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:59:39.768018  724423 kubeadm.go:319] 
	I1101 11:59:39.768101  724423 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:59:39.768105  724423 kubeadm.go:319] 
	I1101 11:59:39.768150  724423 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:59:39.768638  724423 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:59:39.768698  724423 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:59:39.768704  724423 kubeadm.go:319] 
	I1101 11:59:39.768760  724423 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:59:39.768764  724423 kubeadm.go:319] 
	I1101 11:59:39.768814  724423 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:59:39.768819  724423 kubeadm.go:319] 
	I1101 11:59:39.768873  724423 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:59:39.768952  724423 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:59:39.769023  724423 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:59:39.769028  724423 kubeadm.go:319] 
	I1101 11:59:39.769344  724423 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:59:39.769431  724423 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:59:39.769437  724423 kubeadm.go:319] 
	I1101 11:59:39.769788  724423 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5ga2z8.9lj6296wzdr1k7rs \
	I1101 11:59:39.769961  724423 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 11:59:39.770202  724423 kubeadm.go:319] 	--control-plane 
	I1101 11:59:39.770212  724423 kubeadm.go:319] 
	I1101 11:59:39.770623  724423 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:59:39.770634  724423 kubeadm.go:319] 
	I1101 11:59:39.771020  724423 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5ga2z8.9lj6296wzdr1k7rs \
	I1101 11:59:39.771430  724423 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 11:59:39.778684  724423 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 11:59:39.778926  724423 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 11:59:39.779036  724423 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:59:39.779052  724423 cni.go:84] Creating CNI manager for ""
	I1101 11:59:39.779060  724423 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 11:59:39.782303  724423 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 11:59:39.785300  724423 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 11:59:39.795174  724423 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 11:59:39.795198  724423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 11:59:39.856920  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 11:59:40.686042  724423 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:59:40.686187  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:40.686266  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-816860 minikube.k8s.io/updated_at=2025_11_01T11_59_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=embed-certs-816860 minikube.k8s.io/primary=true
	I1101 11:59:40.964311  724423 ops.go:34] apiserver oom_adj: -16
	I1101 11:59:40.964419  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:41.465118  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:37.868103  720939 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-198717" context rescaled to 1 replicas
	W1101 11:59:39.343237  720939 node_ready.go:57] node "no-preload-198717" has "Ready":"False" status (will retry)
	W1101 11:59:41.343616  720939 node_ready.go:57] node "no-preload-198717" has "Ready":"False" status (will retry)
	I1101 11:59:41.965503  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:42.465127  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:42.965364  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:43.465227  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:43.964760  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:44.465323  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:44.965124  724423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:59:45.111180  724423 kubeadm.go:1114] duration metric: took 4.425035919s to wait for elevateKubeSystemPrivileges
	I1101 11:59:45.111212  724423 kubeadm.go:403] duration metric: took 24.856157512s to StartCluster
	I1101 11:59:45.111230  724423 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:45.111310  724423 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:59:45.112788  724423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:59:45.113059  724423 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:59:45.113381  724423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 11:59:45.113832  724423 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:59:45.113875  724423 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:59:45.114129  724423 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-816860"
	I1101 11:59:45.114158  724423 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-816860"
	I1101 11:59:45.114191  724423 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 11:59:45.114746  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 11:59:45.119102  724423 out.go:179] * Verifying Kubernetes components...
	I1101 11:59:45.141374  724423 addons.go:70] Setting default-storageclass=true in profile "embed-certs-816860"
	I1101 11:59:45.141425  724423 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-816860"
	I1101 11:59:45.141837  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 11:59:45.143496  724423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:59:45.252517  724423 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:59:45.256530  724423 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:59:45.256561  724423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:59:45.256639  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:45.276904  724423 addons.go:239] Setting addon default-storageclass=true in "embed-certs-816860"
	I1101 11:59:45.276955  724423 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 11:59:45.277442  724423 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 11:59:45.296186  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:45.331186  724423 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:59:45.331213  724423 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:59:45.331294  724423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 11:59:45.374862  724423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33790 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 11:59:45.651170  724423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 11:59:45.651294  724423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:59:45.681046  724423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:59:45.758629  724423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:59:46.079542  724423 node_ready.go:35] waiting up to 6m0s for node "embed-certs-816860" to be "Ready" ...
	I1101 11:59:46.079867  724423 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 11:59:46.490736  724423 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 11:59:46.493637  724423 addons.go:515] duration metric: took 1.379743582s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 11:59:46.585585  724423 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-816860" context rescaled to 1 replicas
	W1101 11:59:43.843079  720939 node_ready.go:57] node "no-preload-198717" has "Ready":"False" status (will retry)
	W1101 11:59:46.343570  720939 node_ready.go:57] node "no-preload-198717" has "Ready":"False" status (will retry)
	W1101 11:59:48.083311  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 11:59:50.582994  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 11:59:48.344206  720939 node_ready.go:57] node "no-preload-198717" has "Ready":"False" status (will retry)
	W1101 11:59:50.843169  720939 node_ready.go:57] node "no-preload-198717" has "Ready":"False" status (will retry)
	I1101 11:59:51.344603  720939 node_ready.go:49] node "no-preload-198717" is "Ready"
	I1101 11:59:51.344633  720939 node_ready.go:38] duration metric: took 14.004454223s for node "no-preload-198717" to be "Ready" ...
	I1101 11:59:51.344647  720939 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:59:51.344712  720939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:59:51.358437  720939 api_server.go:72] duration metric: took 16.485565395s to wait for apiserver process to appear ...
	I1101 11:59:51.358459  720939 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:59:51.358479  720939 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 11:59:51.368233  720939 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 11:59:51.369443  720939 api_server.go:141] control plane version: v1.34.1
	I1101 11:59:51.369472  720939 api_server.go:131] duration metric: took 11.005418ms to wait for apiserver health ...
	I1101 11:59:51.369482  720939 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:59:51.372536  720939 system_pods.go:59] 8 kube-system pods found
	I1101 11:59:51.372648  720939 system_pods.go:61] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:59:51.372665  720939 system_pods.go:61] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running
	I1101 11:59:51.372672  720939 system_pods.go:61] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 11:59:51.372680  720939 system_pods.go:61] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running
	I1101 11:59:51.372685  720939 system_pods.go:61] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running
	I1101 11:59:51.372716  720939 system_pods.go:61] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 11:59:51.372737  720939 system_pods.go:61] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running
	I1101 11:59:51.372759  720939 system_pods.go:61] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:59:51.372771  720939 system_pods.go:74] duration metric: took 3.282939ms to wait for pod list to return data ...
	I1101 11:59:51.372785  720939 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:59:51.375740  720939 default_sa.go:45] found service account: "default"
	I1101 11:59:51.375764  720939 default_sa.go:55] duration metric: took 2.970287ms for default service account to be created ...
	I1101 11:59:51.375773  720939 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:59:51.378750  720939 system_pods.go:86] 8 kube-system pods found
	I1101 11:59:51.378787  720939 system_pods.go:89] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:59:51.378794  720939 system_pods.go:89] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running
	I1101 11:59:51.378800  720939 system_pods.go:89] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 11:59:51.378804  720939 system_pods.go:89] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running
	I1101 11:59:51.378818  720939 system_pods.go:89] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running
	I1101 11:59:51.378828  720939 system_pods.go:89] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 11:59:51.378833  720939 system_pods.go:89] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running
	I1101 11:59:51.378841  720939 system_pods.go:89] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:59:51.378872  720939 retry.go:31] will retry after 264.134522ms: missing components: kube-dns
	I1101 11:59:51.652600  720939 system_pods.go:86] 8 kube-system pods found
	I1101 11:59:51.652644  720939 system_pods.go:89] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:59:51.652652  720939 system_pods.go:89] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running
	I1101 11:59:51.652658  720939 system_pods.go:89] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 11:59:51.652663  720939 system_pods.go:89] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running
	I1101 11:59:51.652671  720939 system_pods.go:89] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running
	I1101 11:59:51.652680  720939 system_pods.go:89] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 11:59:51.652685  720939 system_pods.go:89] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running
	I1101 11:59:51.652694  720939 system_pods.go:89] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:59:51.652712  720939 retry.go:31] will retry after 251.231223ms: missing components: kube-dns
	I1101 11:59:51.908688  720939 system_pods.go:86] 8 kube-system pods found
	I1101 11:59:51.908724  720939 system_pods.go:89] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:59:51.908731  720939 system_pods.go:89] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running
	I1101 11:59:51.908738  720939 system_pods.go:89] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 11:59:51.908744  720939 system_pods.go:89] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running
	I1101 11:59:51.908748  720939 system_pods.go:89] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running
	I1101 11:59:51.908752  720939 system_pods.go:89] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 11:59:51.908756  720939 system_pods.go:89] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running
	I1101 11:59:51.908772  720939 system_pods.go:89] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:59:51.908790  720939 retry.go:31] will retry after 432.407393ms: missing components: kube-dns
	I1101 11:59:52.344926  720939 system_pods.go:86] 8 kube-system pods found
	I1101 11:59:52.344966  720939 system_pods.go:89] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:59:52.344973  720939 system_pods.go:89] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running
	I1101 11:59:52.344979  720939 system_pods.go:89] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 11:59:52.344984  720939 system_pods.go:89] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running
	I1101 11:59:52.344988  720939 system_pods.go:89] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running
	I1101 11:59:52.344992  720939 system_pods.go:89] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 11:59:52.344997  720939 system_pods.go:89] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running
	I1101 11:59:52.345002  720939 system_pods.go:89] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:59:52.345025  720939 retry.go:31] will retry after 369.110245ms: missing components: kube-dns
	I1101 11:59:52.719428  720939 system_pods.go:86] 8 kube-system pods found
	I1101 11:59:52.719462  720939 system_pods.go:89] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Running
	I1101 11:59:52.719473  720939 system_pods.go:89] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running
	I1101 11:59:52.719478  720939 system_pods.go:89] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 11:59:52.719482  720939 system_pods.go:89] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running
	I1101 11:59:52.719488  720939 system_pods.go:89] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running
	I1101 11:59:52.719495  720939 system_pods.go:89] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 11:59:52.719511  720939 system_pods.go:89] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running
	I1101 11:59:52.719515  720939 system_pods.go:89] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Running
	I1101 11:59:52.719529  720939 system_pods.go:126] duration metric: took 1.343750463s to wait for k8s-apps to be running ...
	I1101 11:59:52.719544  720939 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:59:52.719617  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:59:52.734932  720939 system_svc.go:56] duration metric: took 15.371653ms WaitForService to wait for kubelet
	I1101 11:59:52.734971  720939 kubeadm.go:587] duration metric: took 17.862095636s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:59:52.734991  720939 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:59:52.738059  720939 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:59:52.738105  720939 node_conditions.go:123] node cpu capacity is 2
	I1101 11:59:52.738126  720939 node_conditions.go:105] duration metric: took 3.127581ms to run NodePressure ...
	I1101 11:59:52.738141  720939 start.go:242] waiting for startup goroutines ...
	I1101 11:59:52.738151  720939 start.go:247] waiting for cluster config update ...
	I1101 11:59:52.738161  720939 start.go:256] writing updated cluster config ...
	I1101 11:59:52.738501  720939 ssh_runner.go:195] Run: rm -f paused
	I1101 11:59:52.744499  720939 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:59:52.748602  720939 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7p9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:52.753285  720939 pod_ready.go:94] pod "coredns-66bc5c9577-s7p9w" is "Ready"
	I1101 11:59:52.753313  720939 pod_ready.go:86] duration metric: took 4.684286ms for pod "coredns-66bc5c9577-s7p9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:52.755658  720939 pod_ready.go:83] waiting for pod "etcd-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:52.760213  720939 pod_ready.go:94] pod "etcd-no-preload-198717" is "Ready"
	I1101 11:59:52.760257  720939 pod_ready.go:86] duration metric: took 4.573852ms for pod "etcd-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:52.762925  720939 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:52.768750  720939 pod_ready.go:94] pod "kube-apiserver-no-preload-198717" is "Ready"
	I1101 11:59:52.768778  720939 pod_ready.go:86] duration metric: took 5.824666ms for pod "kube-apiserver-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:52.771804  720939 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:53.148979  720939 pod_ready.go:94] pod "kube-controller-manager-no-preload-198717" is "Ready"
	I1101 11:59:53.149010  720939 pod_ready.go:86] duration metric: took 377.177454ms for pod "kube-controller-manager-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:53.349505  720939 pod_ready.go:83] waiting for pod "kube-proxy-tlh2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:53.748618  720939 pod_ready.go:94] pod "kube-proxy-tlh2v" is "Ready"
	I1101 11:59:53.748646  720939 pod_ready.go:86] duration metric: took 399.11309ms for pod "kube-proxy-tlh2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:53.949600  720939 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:54.349234  720939 pod_ready.go:94] pod "kube-scheduler-no-preload-198717" is "Ready"
	I1101 11:59:54.349265  720939 pod_ready.go:86] duration metric: took 399.636781ms for pod "kube-scheduler-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:59:54.349279  720939 pod_ready.go:40] duration metric: took 1.604746766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:59:54.415594  720939 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 11:59:54.419039  720939 out.go:179] * Done! kubectl is now configured to use "no-preload-198717" cluster and "default" namespace by default
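	    [Note] The readiness gates the log just walked through (apiserver /healthz, kube-system pod listing, per-pod "Ready" waits) can be re-run by hand against the finished profile. A minimal sketch, assuming kubectl is still pointed at the "no-preload-198717" context as the line above states:
	        kubectl get nodes                # node no-preload-198717 should report Ready
	        kubectl -n kube-system get pods  # coredns, kube-proxy, storage-provisioner should be Running
	        kubectl get --raw /healthz       # same healthz probe the log hits at https://192.168.85.2:8443/healthz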
	W1101 11:59:53.082791  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 11:59:55.083527  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 11:59:57.582278  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 11:59:59.583111  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 12:00:01.587647  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 11:59:51 no-preload-198717 crio[843]: time="2025-11-01T11:59:51.671669253Z" level=info msg="Created container 847dd4c463671ef1e5abebdb47b2a0175950905d7475fd77c56cad8b8ffa2164: kube-system/coredns-66bc5c9577-s7p9w/coredns" id=f23c935e-f279-45a3-8323-e5829867b928 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:59:51 no-preload-198717 crio[843]: time="2025-11-01T11:59:51.672594828Z" level=info msg="Starting container: 847dd4c463671ef1e5abebdb47b2a0175950905d7475fd77c56cad8b8ffa2164" id=67e3a87d-99b5-4b07-b0b6-8ff09fa7604e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:59:51 no-preload-198717 crio[843]: time="2025-11-01T11:59:51.675442044Z" level=info msg="Started container" PID=2518 containerID=847dd4c463671ef1e5abebdb47b2a0175950905d7475fd77c56cad8b8ffa2164 description=kube-system/coredns-66bc5c9577-s7p9w/coredns id=67e3a87d-99b5-4b07-b0b6-8ff09fa7604e name=/runtime.v1.RuntimeService/StartContainer sandboxID=85c1fcdc68e97f33794fa65d9077d263dc9f1afb8eb77a80158215d5cd9e10a7
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.96238438Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0ec36044-1537-435a-b1a4-468f5a4c2038 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.962465793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.970538578Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e973118c6d297e02f916fc85edfbbe84c1a9343b79740fbb15faf953eaede331 UID:00673c7a-bc5a-4041-b86d-7c60acfabc54 NetNS:/var/run/netns/74283f99-f0cb-4aef-8199-ab6b6fb88725 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000cfeea0}] Aliases:map[]}"
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.97058081Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.980089561Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e973118c6d297e02f916fc85edfbbe84c1a9343b79740fbb15faf953eaede331 UID:00673c7a-bc5a-4041-b86d-7c60acfabc54 NetNS:/var/run/netns/74283f99-f0cb-4aef-8199-ab6b6fb88725 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000cfeea0}] Aliases:map[]}"
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.980239298Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.984039257Z" level=info msg="Ran pod sandbox e973118c6d297e02f916fc85edfbbe84c1a9343b79740fbb15faf953eaede331 with infra container: default/busybox/POD" id=0ec36044-1537-435a-b1a4-468f5a4c2038 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.987362664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=477a5917-822b-4b7e-88cd-c054961ce6a8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.98753892Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=477a5917-822b-4b7e-88cd-c054961ce6a8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.987589653Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=477a5917-822b-4b7e-88cd-c054961ce6a8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.988518904Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=40128b1f-e114-4b8f-b870-4cc8f7cc0df2 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:59:54 no-preload-198717 crio[843]: time="2025-11-01T11:59:54.991034678Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.182530491Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=40128b1f-e114-4b8f-b870-4cc8f7cc0df2 name=/runtime.v1.ImageService/PullImage
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.183141363Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df44ff1d-5092-418d-bdb0-e9c3592e7da0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.192920968Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b544d48-d555-4cf9-9c99-71b874cc98be name=/runtime.v1.ImageService/ImageStatus
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.198997871Z" level=info msg="Creating container: default/busybox/busybox" id=2f6e5e3e-07d0-4cdc-a7f1-4624793a423a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.199131305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.204864917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.205403164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.220687415Z" level=info msg="Created container 8e894d25f36a92b38f1ccac09a30bac2d54dcb823cc4119f307519bbe406e171: default/busybox/busybox" id=2f6e5e3e-07d0-4cdc-a7f1-4624793a423a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.2264222Z" level=info msg="Starting container: 8e894d25f36a92b38f1ccac09a30bac2d54dcb823cc4119f307519bbe406e171" id=6f4cc6f9-22d9-47a5-b9a6-d977d68680d5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 11:59:57 no-preload-198717 crio[843]: time="2025-11-01T11:59:57.229184747Z" level=info msg="Started container" PID=2573 containerID=8e894d25f36a92b38f1ccac09a30bac2d54dcb823cc4119f307519bbe406e171 description=default/busybox/busybox id=6f4cc6f9-22d9-47a5-b9a6-d977d68680d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e973118c6d297e02f916fc85edfbbe84c1a9343b79740fbb15faf953eaede331
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8e894d25f36a9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   e973118c6d297       busybox                                     default
	847dd4c463671       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   85c1fcdc68e97       coredns-66bc5c9577-s7p9w                    kube-system
	37a77b638f67b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   8d1e40d2100e3       storage-provisioner                         kube-system
	6120b06b347fa       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   b81d10914f18d       kindnet-qnmmf                               kube-system
	6daa7cf1efcdb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   6adfaf463ce59       kube-proxy-tlh2v                            kube-system
	38257ccb781b2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      43 seconds ago      Running             etcd                      0                   8562bc3c7f281       etcd-no-preload-198717                      kube-system
	7422d54d68c52       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      43 seconds ago      Running             kube-apiserver            0                   36afb5c1a4e50       kube-apiserver-no-preload-198717            kube-system
	c06701332f9cf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      43 seconds ago      Running             kube-scheduler            0                   eaf7ecd7c6eb0       kube-scheduler-no-preload-198717            kube-system
	7e08d65068f7d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      43 seconds ago      Running             kube-controller-manager   0                   7cf1da31e6335       kube-controller-manager-no-preload-198717   kube-system
	
	
	==> coredns [847dd4c463671ef1e5abebdb47b2a0175950905d7475fd77c56cad8b8ffa2164] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35604 - 11038 "HINFO IN 1418319993762644297.214022114847542327. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.03305627s
	
	
	==> describe nodes <==
	Name:               no-preload-198717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-198717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=no-preload-198717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-198717
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:00:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:00:01 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:00:01 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:00:01 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:00:01 +0000   Sat, 01 Nov 2025 11:59:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-198717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f8c2bafd-3783-4a3a-8c96-56d9871a2cad
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-s7p9w                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-198717                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-qnmmf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-198717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-198717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-tlh2v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-198717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 27s   kube-proxy       
	  Normal   Starting                 34s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s   kubelet          Node no-preload-198717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s   kubelet          Node no-preload-198717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s   kubelet          Node no-preload-198717 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s   node-controller  Node no-preload-198717 event: Registered Node no-preload-198717 in Controller
	  Normal   NodeReady                13s   kubelet          Node no-preload-198717 status is now: NodeReady
	
	
	==> dmesg <==
	[ +35.784283] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [38257ccb781b2b1a356d207b6041140c3dccf9dc58e6fa89bb8a726c7c9720f1] <==
	{"level":"warn","ts":"2025-11-01T11:59:24.266173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.319244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.371990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.398384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.460131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.489160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.589878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.629898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.672924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.762805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.779176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.824023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.880218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:24.989899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.016516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.141028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.141625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.176912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.228113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.262385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.338311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.362661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.436227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.436625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:25.609295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38406","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:00:04 up  3:42,  0 user,  load average: 4.19, 3.53, 2.79
	Linux no-preload-198717 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6120b06b347fac6b933ec6015ca2f2f67ee75c02315b4b1e373db21e9d109395] <==
	I1101 11:59:40.731601       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:59:40.731892       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 11:59:40.732030       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:59:40.732042       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:59:40.732052       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:59:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:59:40.922823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:59:40.922853       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:59:40.922863       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:59:40.923159       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 11:59:41.123030       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 11:59:41.123058       1 metrics.go:72] Registering metrics
	I1101 11:59:41.123135       1 controller.go:711] "Syncing nftables rules"
	I1101 11:59:50.929539       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 11:59:50.929597       1 main.go:301] handling current node
	I1101 12:00:00.924943       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:00:00.924975       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7422d54d68c524d4e272359a99cd2878b2a08fb375ead9ecfccbf513e000dcd8] <==
	I1101 11:59:27.306004       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:59:27.336337       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 11:59:27.340385       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:27.342727       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 11:59:27.456021       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:59:27.456755       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:27.456823       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 11:59:27.724998       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 11:59:27.753199       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 11:59:27.753222       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:59:28.986575       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:59:29.045670       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:59:29.191957       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 11:59:29.208215       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 11:59:29.210370       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:59:29.219050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:59:29.250509       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 11:59:30.025377       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 11:59:30.062104       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 11:59:30.086325       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 11:59:34.468812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:34.490989       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:35.072774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 11:59:35.511016       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 12:00:02.830174       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:56558: use of closed network connection
	
	
	==> kube-controller-manager [7e08d65068f7d4fc11cb24b3e0f3ac7731065b6019c90f181143d192c0bb8431] <==
	I1101 11:59:34.337971       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 11:59:34.338028       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 11:59:34.338310       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 11:59:34.339027       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 11:59:34.340047       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 11:59:34.340341       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 11:59:34.344945       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 11:59:34.345049       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 11:59:34.345110       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 11:59:34.345840       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 11:59:34.347744       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 11:59:34.350115       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 11:59:34.350224       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 11:59:34.350389       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 11:59:34.350438       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 11:59:34.350482       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 11:59:34.367591       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-198717" podCIDRs=["10.244.0.0/24"]
	I1101 11:59:34.367762       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 11:59:34.367828       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 11:59:34.369346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 11:59:34.387855       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:59:34.387958       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:59:34.387989       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:59:34.413835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:59:54.297935       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6daa7cf1efcdbb04d5c64fdd9694c162977fc7ae68195422fd53423e8677b2b4] <==
	I1101 11:59:36.563439       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:59:36.711918       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:59:36.812481       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:59:36.812519       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 11:59:36.812599       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:59:36.849464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:59:36.849555       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:59:36.857951       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:59:36.860531       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:59:36.860560       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:59:36.870139       1 config.go:200] "Starting service config controller"
	I1101 11:59:36.870158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:59:36.870176       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:59:36.870180       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:59:36.870192       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:59:36.870196       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:59:36.879025       1 config.go:309] "Starting node config controller"
	I1101 11:59:36.879043       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:59:36.971275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:59:36.971315       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:59:36.971361       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 11:59:36.979908       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [c06701332f9cf45bc8d569e69154aba25ea125d818407d3d96636406df0edc1a] <==
	I1101 11:59:27.381214       1 serving.go:386] Generated self-signed cert in-memory
	W1101 11:59:28.868312       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:59:28.868437       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:59:28.868471       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:59:28.868505       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:59:28.897824       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:59:28.898023       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:59:28.900336       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:59:28.900433       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 11:59:28.905935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 11:59:28.906023       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:59:28.906360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:59:30.405493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:59:31 no-preload-198717 kubelet[2036]: E1101 11:59:31.636377    2036 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-198717\" already exists" pod="kube-system/etcd-no-preload-198717"
	Nov 01 11:59:31 no-preload-198717 kubelet[2036]: E1101 11:59:31.676584    2036 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-198717\" already exists" pod="kube-system/kube-controller-manager-no-preload-198717"
	Nov 01 11:59:34 no-preload-198717 kubelet[2036]: I1101 11:59:34.388826    2036 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 11:59:34 no-preload-198717 kubelet[2036]: I1101 11:59:34.392465    2036 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.663118    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f70495ad-543e-4581-98b7-9e82ba963087-xtables-lock\") pod \"kindnet-qnmmf\" (UID: \"f70495ad-543e-4581-98b7-9e82ba963087\") " pod="kube-system/kindnet-qnmmf"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.663166    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f70495ad-543e-4581-98b7-9e82ba963087-cni-cfg\") pod \"kindnet-qnmmf\" (UID: \"f70495ad-543e-4581-98b7-9e82ba963087\") " pod="kube-system/kindnet-qnmmf"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.663187    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f70495ad-543e-4581-98b7-9e82ba963087-lib-modules\") pod \"kindnet-qnmmf\" (UID: \"f70495ad-543e-4581-98b7-9e82ba963087\") " pod="kube-system/kindnet-qnmmf"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.663206    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqfr\" (UniqueName: \"kubernetes.io/projected/f70495ad-543e-4581-98b7-9e82ba963087-kube-api-access-sdqfr\") pod \"kindnet-qnmmf\" (UID: \"f70495ad-543e-4581-98b7-9e82ba963087\") " pod="kube-system/kindnet-qnmmf"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.865771    2036 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.867242    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ded2c625-39aa-414d-b063-d523a28dd850-xtables-lock\") pod \"kube-proxy-tlh2v\" (UID: \"ded2c625-39aa-414d-b063-d523a28dd850\") " pod="kube-system/kube-proxy-tlh2v"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.867350    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ded2c625-39aa-414d-b063-d523a28dd850-lib-modules\") pod \"kube-proxy-tlh2v\" (UID: \"ded2c625-39aa-414d-b063-d523a28dd850\") " pod="kube-system/kube-proxy-tlh2v"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.867423    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8qw\" (UniqueName: \"kubernetes.io/projected/ded2c625-39aa-414d-b063-d523a28dd850-kube-api-access-cm8qw\") pod \"kube-proxy-tlh2v\" (UID: \"ded2c625-39aa-414d-b063-d523a28dd850\") " pod="kube-system/kube-proxy-tlh2v"
	Nov 01 11:59:35 no-preload-198717 kubelet[2036]: I1101 11:59:35.867501    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ded2c625-39aa-414d-b063-d523a28dd850-kube-proxy\") pod \"kube-proxy-tlh2v\" (UID: \"ded2c625-39aa-414d-b063-d523a28dd850\") " pod="kube-system/kube-proxy-tlh2v"
	Nov 01 11:59:36 no-preload-198717 kubelet[2036]: W1101 11:59:36.088265    2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/crio-6adfaf463ce599973c1db7f30342c89d46db3562735faf1455a6b0a43e687d9f WatchSource:0}: Error finding container 6adfaf463ce599973c1db7f30342c89d46db3562735faf1455a6b0a43e687d9f: Status 404 returned error can't find the container with id 6adfaf463ce599973c1db7f30342c89d46db3562735faf1455a6b0a43e687d9f
	Nov 01 11:59:37 no-preload-198717 kubelet[2036]: I1101 11:59:37.924230    2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tlh2v" podStartSLOduration=2.924122427 podStartE2EDuration="2.924122427s" podCreationTimestamp="2025-11-01 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:59:36.593522036 +0000 UTC m=+6.643981586" watchObservedRunningTime="2025-11-01 11:59:37.924122427 +0000 UTC m=+7.974581985"
	Nov 01 11:59:40 no-preload-198717 kubelet[2036]: I1101 11:59:40.706704    2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qnmmf" podStartSLOduration=1.2253770099999999 podStartE2EDuration="5.70668448s" podCreationTimestamp="2025-11-01 11:59:35 +0000 UTC" firstStartedPulling="2025-11-01 11:59:35.967899743 +0000 UTC m=+6.018359293" lastFinishedPulling="2025-11-01 11:59:40.449207213 +0000 UTC m=+10.499666763" observedRunningTime="2025-11-01 11:59:40.644548835 +0000 UTC m=+10.695008393" watchObservedRunningTime="2025-11-01 11:59:40.70668448 +0000 UTC m=+10.757144030"
	Nov 01 11:59:51 no-preload-198717 kubelet[2036]: I1101 11:59:51.218928    2036 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 11:59:51 no-preload-198717 kubelet[2036]: I1101 11:59:51.335443    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487ba34d-2e32-4d07-bcf2-d5ed1a59340b-config-volume\") pod \"coredns-66bc5c9577-s7p9w\" (UID: \"487ba34d-2e32-4d07-bcf2-d5ed1a59340b\") " pod="kube-system/coredns-66bc5c9577-s7p9w"
	Nov 01 11:59:51 no-preload-198717 kubelet[2036]: I1101 11:59:51.335500    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mxd\" (UniqueName: \"kubernetes.io/projected/487ba34d-2e32-4d07-bcf2-d5ed1a59340b-kube-api-access-p6mxd\") pod \"coredns-66bc5c9577-s7p9w\" (UID: \"487ba34d-2e32-4d07-bcf2-d5ed1a59340b\") " pod="kube-system/coredns-66bc5c9577-s7p9w"
	Nov 01 11:59:51 no-preload-198717 kubelet[2036]: I1101 11:59:51.335526    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdcj\" (UniqueName: \"kubernetes.io/projected/7242eed2-7588-463b-9906-b5289039fe17-kube-api-access-ltdcj\") pod \"storage-provisioner\" (UID: \"7242eed2-7588-463b-9906-b5289039fe17\") " pod="kube-system/storage-provisioner"
	Nov 01 11:59:51 no-preload-198717 kubelet[2036]: I1101 11:59:51.335550    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7242eed2-7588-463b-9906-b5289039fe17-tmp\") pod \"storage-provisioner\" (UID: \"7242eed2-7588-463b-9906-b5289039fe17\") " pod="kube-system/storage-provisioner"
	Nov 01 11:59:51 no-preload-198717 kubelet[2036]: W1101 11:59:51.606112    2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/crio-85c1fcdc68e97f33794fa65d9077d263dc9f1afb8eb77a80158215d5cd9e10a7 WatchSource:0}: Error finding container 85c1fcdc68e97f33794fa65d9077d263dc9f1afb8eb77a80158215d5cd9e10a7: Status 404 returned error can't find the container with id 85c1fcdc68e97f33794fa65d9077d263dc9f1afb8eb77a80158215d5cd9e10a7
	Nov 01 11:59:52 no-preload-198717 kubelet[2036]: I1101 11:59:52.661420    2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s7p9w" podStartSLOduration=17.661392369 podStartE2EDuration="17.661392369s" podCreationTimestamp="2025-11-01 11:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:59:52.639852022 +0000 UTC m=+22.690311588" watchObservedRunningTime="2025-11-01 11:59:52.661392369 +0000 UTC m=+22.711851919"
	Nov 01 11:59:52 no-preload-198717 kubelet[2036]: I1101 11:59:52.683979    2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.683962499 podStartE2EDuration="15.683962499s" podCreationTimestamp="2025-11-01 11:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:59:52.663239679 +0000 UTC m=+22.713699237" watchObservedRunningTime="2025-11-01 11:59:52.683962499 +0000 UTC m=+22.734422048"
	Nov 01 11:59:54 no-preload-198717 kubelet[2036]: I1101 11:59:54.755976    2036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsg5s\" (UniqueName: \"kubernetes.io/projected/00673c7a-bc5a-4041-b86d-7c60acfabc54-kube-api-access-rsg5s\") pod \"busybox\" (UID: \"00673c7a-bc5a-4041-b86d-7c60acfabc54\") " pod="default/busybox"
	
	
	==> storage-provisioner [37a77b638f67be085c1e554819caa199d60eac63a99a15cae147656f7f3abe88] <==
	I1101 11:59:51.649924       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 11:59:51.668581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 11:59:51.668636       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 11:59:51.671686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:51.697805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 11:59:51.698054       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 11:59:51.699207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-198717_8e77db67-a7b0-4f5d-8c7e-b9cc8db0bf15!
	I1101 11:59:51.700271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a58c8693-3c87-4a71-8fd5-eb11efb6d780", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-198717_8e77db67-a7b0-4f5d-8c7e-b9cc8db0bf15 became leader
	W1101 11:59:51.708992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:51.717595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 11:59:51.799637       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-198717_8e77db67-a7b0-4f5d-8c7e-b9cc8db0bf15!
	W1101 11:59:53.721220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:53.728856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:55.735232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:55.740232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:57.743351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:57.749974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:59.753027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:59:59.758306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:01.775096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:01.784815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:03.787779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:03.794092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-198717 -n no-preload-198717
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-198717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)
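Side note on the storage-provisioner output above: it repeats "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" roughly every two seconds, apparently once per leader-election renewal, because the provisioner still takes its lock on a legacy Endpoints object (the "k8s.io-minikube-hostpath" object named in its LeaderElection event). A quick way to look at that object by hand — a sketch only; the context and object name are copied from the log, and it assumes the no-preload-198717 cluster is still reachable:

	kubectl --context no-preload-198717 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context no-preload-198717 -n kube-system get endpointslices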

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (352.99687ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:00:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
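The exit status 11 above comes from minikube's paused-state pre-check, which shells into the node and runs the exact command quoted in the error ("sudo runc list -f json") and fails because /run/runc does not exist. A minimal way to rerun that check by hand — a sketch; it assumes the embed-certs-816860 node container is still running and reuses only the command from the error message plus standard crictl:

	out/minikube-linux-arm64 ssh -p embed-certs-816860 -- sudo runc list -f json    # the check that exited with status 1
	out/minikube-linux-arm64 ssh -p embed-certs-816860 -- ls -ld /run/runc          # confirm whether the runc state directory exists
	out/minikube-linux-arm64 ssh -p embed-certs-816860 -- sudo crictl ps            # cross-check container state through CRI-O directly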
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-816860 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-816860 describe deploy/metrics-server -n kube-system: exit status 1 (114.241475ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-816860 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
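The image assertion above has nothing to match against because the metrics-server Deployment was never created (the addon enable exited before applying it). When the addon does deploy, the value the test greps for can be read directly — a sketch using the context name from this test:

	kubectl --context embed-certs-816860 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'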
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-816860
helpers_test.go:243: (dbg) docker inspect embed-certs-816860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a",
	        "Created": "2025-11-01T11:59:10.098758518Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 725084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:59:10.164557865Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/hosts",
	        "LogPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a-json.log",
	        "Name": "/embed-certs-816860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-816860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-816860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a",
	                "LowerDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-816860",
	                "Source": "/var/lib/docker/volumes/embed-certs-816860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-816860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-816860",
	                "name.minikube.sigs.k8s.io": "embed-certs-816860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e5739ab374c5aa5af1eab26912de8a176a0b344854a84a57254c357a4303936d",
	            "SandboxKey": "/var/run/docker/netns/e5739ab374c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-816860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:1b:17:0d:6f:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c593e124071dc106f0bb655a4bbd20938473ea59778c717ee430f5236bedf71",
	                    "EndpointID": "5b147a9c21d603a8b67643452c718a403a0e9e73eaf96317943f78371267648a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-816860",
	                        "5efd8111d020"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
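The docker inspect dump above is mainly useful for the port mappings: 8443/tcp (the Kubernetes API server) is published on 127.0.0.1:33793, which is how the host-side status check reaches the cluster. The same value can be pulled out with a Go template instead of scanning the JSON — a sketch; container name as in this report:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-816860
	33793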
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-816860 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-816860 logs -n 25: (1.684231634s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-flag-643844                                                                                                                                                                                                                  │ force-systemd-flag-643844 │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:54 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:54 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p force-systemd-env-857548                                                                                                                                                                                                                   │ force-systemd-env-857548  │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ cert-options-505831 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831       │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	│ stop    │ -p old-k8s-version-952358 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694    │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860        │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717         │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860        │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:00:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:00:17.716320  728709 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:00:17.716488  728709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:00:17.716501  728709 out.go:374] Setting ErrFile to fd 2...
	I1101 12:00:17.716507  728709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:00:17.716746  728709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:00:17.717149  728709 out.go:368] Setting JSON to false
	I1101 12:00:17.718162  728709 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13367,"bootTime":1761985051,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:00:17.718235  728709 start.go:143] virtualization:  
	I1101 12:00:17.721300  728709 out.go:179] * [no-preload-198717] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:00:17.725183  728709 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:00:17.725312  728709 notify.go:221] Checking for updates...
	I1101 12:00:17.731325  728709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:00:17.734255  728709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:00:17.737222  728709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:00:17.740135  728709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:00:17.742959  728709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:00:17.746489  728709 config.go:182] Loaded profile config "no-preload-198717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:00:17.747127  728709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:00:17.772663  728709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:00:17.772788  728709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:00:17.842702  728709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:00:17.825539006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:00:17.842815  728709 docker.go:319] overlay module found
	I1101 12:00:17.845918  728709 out.go:179] * Using the docker driver based on existing profile
	I1101 12:00:17.848798  728709 start.go:309] selected driver: docker
	I1101 12:00:17.848818  728709 start.go:930] validating driver "docker" against &{Name:no-preload-198717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:17.848916  728709 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:00:17.849732  728709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:00:17.919265  728709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:00:17.910079223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:00:17.919611  728709 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:00:17.919650  728709 cni.go:84] Creating CNI manager for ""
	I1101 12:00:17.919713  728709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:00:17.919761  728709 start.go:353] cluster config:
	{Name:no-preload-198717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:17.922977  728709 out.go:179] * Starting "no-preload-198717" primary control-plane node in "no-preload-198717" cluster
	I1101 12:00:17.925807  728709 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:00:17.928765  728709 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:00:17.931530  728709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:00:17.931624  728709 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:00:17.931664  728709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/config.json ...
	I1101 12:00:17.931970  728709 cache.go:107] acquiring lock: {Name:mk64a54301e5f301dd8f9b1fe386ce8a8d38b0d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932056  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 12:00:17.932073  728709 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.301µs
	I1101 12:00:17.932081  728709 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 12:00:17.932092  728709 cache.go:107] acquiring lock: {Name:mkfd1ee89ed3f86e66cf6849c648886e407ac84b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932130  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 12:00:17.932140  728709 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 48.337µs
	I1101 12:00:17.932146  728709 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 12:00:17.932160  728709 cache.go:107] acquiring lock: {Name:mk9329b3bdc5468d662007f350c48f9a3ba6116c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932193  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 12:00:17.932202  728709 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.824µs
	I1101 12:00:17.932208  728709 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 12:00:17.932217  728709 cache.go:107] acquiring lock: {Name:mkf914cc05f04e48184faa865ad2bdf1756e13cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932248  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 12:00:17.932256  728709 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.304µs
	I1101 12:00:17.932263  728709 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 12:00:17.932282  728709 cache.go:107] acquiring lock: {Name:mkfb3887591b2fdfbeb666e633f3c0d91406860a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932314  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 12:00:17.932323  728709 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 41.396µs
	I1101 12:00:17.932329  728709 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 12:00:17.932338  728709 cache.go:107] acquiring lock: {Name:mk73798de2f1e45e99156ca65e559a36d87bb634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932368  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 12:00:17.932377  728709 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.403µs
	I1101 12:00:17.932383  728709 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 12:00:17.932391  728709 cache.go:107] acquiring lock: {Name:mk9bdd925087b7f911424061a17832b0340fc227 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932420  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 12:00:17.932429  728709 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 38.466µs
	I1101 12:00:17.932435  728709 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 12:00:17.932443  728709 cache.go:107] acquiring lock: {Name:mkdd5435b47ffcd21fd69c586e6527f52e4bd9c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.932473  728709 cache.go:115] /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 12:00:17.932482  728709 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 40.107µs
	I1101 12:00:17.932488  728709 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 12:00:17.932494  728709 cache.go:87] Successfully saved all images to host disk.
	I1101 12:00:17.952187  728709 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:00:17.952212  728709 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:00:17.952235  728709 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:00:17.952259  728709 start.go:360] acquireMachinesLock for no-preload-198717: {Name:mkfdaff0495430325a114e7f53bcfc854eb0e8ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:17.952319  728709 start.go:364] duration metric: took 39.509µs to acquireMachinesLock for "no-preload-198717"
	I1101 12:00:17.952344  728709 start.go:96] Skipping create...Using existing machine configuration
	I1101 12:00:17.952353  728709 fix.go:54] fixHost starting: 
	I1101 12:00:17.952620  728709 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:00:17.970103  728709 fix.go:112] recreateIfNeeded on no-preload-198717: state=Stopped err=<nil>
	W1101 12:00:17.970134  728709 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 12:00:18.084006  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 12:00:20.582226  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	I1101 12:00:17.973406  728709 out.go:252] * Restarting existing docker container for "no-preload-198717" ...
	I1101 12:00:17.973495  728709 cli_runner.go:164] Run: docker start no-preload-198717
	I1101 12:00:18.246580  728709 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:00:18.270280  728709 kic.go:430] container "no-preload-198717" state is running.
	I1101 12:00:18.270709  728709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-198717
	I1101 12:00:18.302886  728709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/config.json ...
	I1101 12:00:18.303121  728709 machine.go:94] provisionDockerMachine start ...
	I1101 12:00:18.303179  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:18.326534  728709 main.go:143] libmachine: Using SSH client type: native
	I1101 12:00:18.326851  728709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33795 <nil> <nil>}
	I1101 12:00:18.326867  728709 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:00:18.327481  728709 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 12:00:21.481503  728709 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-198717
	
	I1101 12:00:21.481544  728709 ubuntu.go:182] provisioning hostname "no-preload-198717"
	I1101 12:00:21.481611  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:21.500189  728709 main.go:143] libmachine: Using SSH client type: native
	I1101 12:00:21.500504  728709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33795 <nil> <nil>}
	I1101 12:00:21.500521  728709 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-198717 && echo "no-preload-198717" | sudo tee /etc/hostname
	I1101 12:00:21.661064  728709 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-198717
	
	I1101 12:00:21.661164  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:21.679397  728709 main.go:143] libmachine: Using SSH client type: native
	I1101 12:00:21.679718  728709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33795 <nil> <nil>}
	I1101 12:00:21.679745  728709 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-198717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-198717/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-198717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:00:21.838376  728709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:00:21.838414  728709 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:00:21.838439  728709 ubuntu.go:190] setting up certificates
	I1101 12:00:21.838453  728709 provision.go:84] configureAuth start
	I1101 12:00:21.838521  728709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-198717
	I1101 12:00:21.858403  728709 provision.go:143] copyHostCerts
	I1101 12:00:21.858481  728709 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:00:21.858502  728709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:00:21.858586  728709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:00:21.858693  728709 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:00:21.858706  728709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:00:21.858734  728709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:00:21.858796  728709 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:00:21.858805  728709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:00:21.858832  728709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:00:21.858886  728709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.no-preload-198717 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-198717]
	I1101 12:00:22.074598  728709 provision.go:177] copyRemoteCerts
	I1101 12:00:22.074680  728709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:00:22.074739  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:22.095065  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:22.202109  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 12:00:22.221508  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:00:22.239663  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:00:22.260943  728709 provision.go:87] duration metric: took 422.462931ms to configureAuth
	I1101 12:00:22.260972  728709 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:00:22.261184  728709 config.go:182] Loaded profile config "no-preload-198717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:00:22.261291  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:22.279195  728709 main.go:143] libmachine: Using SSH client type: native
	I1101 12:00:22.279514  728709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33795 <nil> <nil>}
	I1101 12:00:22.279528  728709 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:00:22.609567  728709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:00:22.609626  728709 machine.go:97] duration metric: took 4.306495012s to provisionDockerMachine
	I1101 12:00:22.609653  728709 start.go:293] postStartSetup for "no-preload-198717" (driver="docker")
	I1101 12:00:22.609681  728709 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:00:22.609814  728709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:00:22.609894  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:22.627661  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:22.737623  728709 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:00:22.740867  728709 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:00:22.740896  728709 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:00:22.740908  728709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:00:22.740963  728709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:00:22.741046  728709 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:00:22.741176  728709 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:00:22.748773  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:00:22.769007  728709 start.go:296] duration metric: took 159.322138ms for postStartSetup
	I1101 12:00:22.769153  728709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:00:22.769260  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:22.786962  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:22.886766  728709 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:00:22.891735  728709 fix.go:56] duration metric: took 4.93937373s for fixHost
	I1101 12:00:22.891761  728709 start.go:83] releasing machines lock for "no-preload-198717", held for 4.939426728s
	I1101 12:00:22.891834  728709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-198717
	I1101 12:00:22.909890  728709 ssh_runner.go:195] Run: cat /version.json
	I1101 12:00:22.909945  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:22.910206  728709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:00:22.910317  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:22.933784  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:22.935461  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:23.037507  728709 ssh_runner.go:195] Run: systemctl --version
	I1101 12:00:23.151507  728709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:00:23.189083  728709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:00:23.193718  728709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:00:23.193813  728709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:00:23.203317  728709 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:00:23.203342  728709 start.go:496] detecting cgroup driver to use...
	I1101 12:00:23.203374  728709 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:00:23.203432  728709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:00:23.219198  728709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:00:23.233579  728709 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:00:23.233645  728709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:00:23.249631  728709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:00:23.262965  728709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:00:23.393434  728709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:00:23.515141  728709 docker.go:234] disabling docker service ...
	I1101 12:00:23.515212  728709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:00:23.531603  728709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:00:23.545379  728709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:00:23.677773  728709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:00:23.816353  728709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:00:23.830504  728709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:00:23.845094  728709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:00:23.845193  728709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.855018  728709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:00:23.855087  728709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.864300  728709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.873596  728709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.883146  728709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:00:23.891254  728709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.901254  728709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.910870  728709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:00:23.920288  728709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:00:23.929271  728709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:00:23.937074  728709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:00:24.060906  728709 ssh_runner.go:195] Run: sudo systemctl restart crio
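	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctls), reloads systemd units, and restarts cri-o. As a reading aid, here is a minimal Go sketch of the same edit-and-restart idea; it assumes local root access to the same drop-in path and is an illustration, not minikube's actual ssh_runner-based implementation:

```go
// Sketch only: mirror the sed edits from the log above, then restart cri-o.
package main

import (
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in file edited in the log above

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)

	// Pin the pause image and the cgroup manager, as the sed commands do.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}

	// Reload units and restart cri-o so the new settings take effect.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
```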
	I1101 12:00:24.198381  728709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:00:24.198472  728709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:00:24.204118  728709 start.go:564] Will wait 60s for crictl version
	I1101 12:00:24.204198  728709 ssh_runner.go:195] Run: which crictl
	I1101 12:00:24.208290  728709 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:00:24.233186  728709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:00:24.233288  728709 ssh_runner.go:195] Run: crio --version
	I1101 12:00:24.267220  728709 ssh_runner.go:195] Run: crio --version
	I1101 12:00:24.305468  728709 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:00:24.308393  728709 cli_runner.go:164] Run: docker network inspect no-preload-198717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:00:24.324608  728709 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 12:00:24.328689  728709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:00:24.338511  728709 kubeadm.go:884] updating cluster {Name:no-preload-198717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:00:24.338622  728709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:00:24.338671  728709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:00:24.375775  728709 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:00:24.375806  728709 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:00:24.375815  728709 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 12:00:24.375908  728709 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-198717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:00:24.375996  728709 ssh_runner.go:195] Run: crio config
	I1101 12:00:24.459519  728709 cni.go:84] Creating CNI manager for ""
	I1101 12:00:24.459539  728709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:00:24.459561  728709 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:00:24.459585  728709 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-198717 NodeName:no-preload-198717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:00:24.459709  728709 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-198717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
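	The YAML above is the multi-document configuration minikube renders for kubeadm (InitConfiguration and ClusterConfiguration), the kubelet (KubeletConfiguration), and kube-proxy (KubeProxyConfiguration); a few lines below, the log copies it to /var/tmp/minikube/kubeadm.yaml.new. The following stdlib-only Go sketch inspects which documents such a file contains; it uses simple line scanning rather than a real YAML parser, and the path is taken from the log:

```go
// Sketch only: list the apiVersion/kind of each document in a multi-document kubeadm config.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml.new" // path shown in the log; adjust as needed
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Split on the "---" separators and report apiVersion/kind per document.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			switch {
			case strings.HasPrefix(line, "apiVersion:"):
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			case strings.HasPrefix(line, "kind:"):
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("doc %d: %s / %s\n", i, apiVersion, kind)
	}
}
```

	Against the config shown above, this would report four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration.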
	
	I1101 12:00:24.459781  728709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:00:24.468647  728709 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:00:24.468718  728709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:00:24.476545  728709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 12:00:24.490103  728709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:00:24.503793  728709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 12:00:24.518604  728709 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:00:24.522427  728709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
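	The shell pipeline above keeps the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale entry is filtered out and the current one is appended. A rough Go equivalent of the same idea, assuming write access to /etc/hosts and using the IP shown in the log (illustration only, not minikube's code):

```go
// Sketch only: drop any stale control-plane.minikube.internal line, then append the current mapping.
package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal" // IP taken from the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Keep every line that does not already map the control-plane name.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```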
	I1101 12:00:24.532931  728709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:00:24.657403  728709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:00:24.675277  728709 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717 for IP: 192.168.85.2
	I1101 12:00:24.675300  728709 certs.go:195] generating shared ca certs ...
	I1101 12:00:24.675317  728709 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:00:24.675462  728709 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:00:24.675518  728709 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:00:24.675532  728709 certs.go:257] generating profile certs ...
	I1101 12:00:24.675616  728709 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.key
	I1101 12:00:24.675701  728709 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key.5fa2dae3
	I1101 12:00:24.675752  728709 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.key
	I1101 12:00:24.675880  728709 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:00:24.675914  728709 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:00:24.675928  728709 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:00:24.675951  728709 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:00:24.675976  728709 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:00:24.676003  728709 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:00:24.676053  728709 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:00:24.676727  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:00:24.709178  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:00:24.730901  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:00:24.750991  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:00:24.771942  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 12:00:24.808004  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:00:24.840408  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:00:24.872500  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 12:00:24.897869  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:00:24.924674  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:00:24.945384  728709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:00:24.966192  728709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:00:24.979955  728709 ssh_runner.go:195] Run: openssl version
	I1101 12:00:24.988380  728709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:00:24.998176  728709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:00:25.003124  728709 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:00:25.003250  728709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:00:25.051425  728709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:00:25.060839  728709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:00:25.071789  728709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:00:25.076442  728709 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:00:25.076513  728709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:00:25.130086  728709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:00:25.139224  728709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:00:25.149750  728709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:00:25.154420  728709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:00:25.154504  728709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:00:25.197552  728709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:00:25.206540  728709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:00:25.210764  728709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:00:25.252959  728709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:00:25.300770  728709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:00:25.347626  728709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:00:25.401666  728709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:00:25.469437  728709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
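	The series of "openssl x509 -noout -in <cert> -checkend 86400" runs above verifies that each control-plane certificate remains valid for at least 24 hours before the cluster is restarted. The same check can be expressed with Go's crypto/x509; in this sketch the paths are copied from the log (a subset of those checked above) and the 24-hour window mirrors -checkend 86400:

```go
// Sketch only: report certificates that expire within 24 hours, like "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	certs := []string{ // paths taken from the log above
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of -checkend 86400: flag certs that expire within 24 hours.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Printf("%s expires soon (NotAfter=%s)\n", path, cert.NotAfter)
		} else {
			fmt.Printf("%s ok until %s\n", path, cert.NotAfter)
		}
	}
}
```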
	I1101 12:00:25.564728  728709 kubeadm.go:401] StartCluster: {Name:no-preload-198717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-198717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:25.564816  728709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:00:25.564884  728709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:00:25.633353  728709 cri.go:89] found id: "4146bebcfc78fff7e205d15a351a3b9489d9f1d7f2ce428d242490a4a9a214da"
	I1101 12:00:25.633376  728709 cri.go:89] found id: "f3772f41e725d1af7e862ae449d7118696e53f3be37b8779faa9d26f954875a8"
	I1101 12:00:25.633381  728709 cri.go:89] found id: "9d24638b6e39f00dc4f5ad46eade0ee4467aa0d861d222443a6b43a6ccaaf579"
	I1101 12:00:25.633384  728709 cri.go:89] found id: "41bc0ffb4ace7b78b5269921a034d897960eed08f17125d3ab8c8df9c3a224fd"
	I1101 12:00:25.633387  728709 cri.go:89] found id: ""
	I1101 12:00:25.633445  728709 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:00:25.648105  728709 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:00:25Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:00:25.648203  728709 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:00:25.663595  728709 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:00:25.663612  728709 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:00:25.663670  728709 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:00:25.676158  728709 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:00:25.677003  728709 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-198717" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:00:25.677518  728709 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-198717" cluster setting kubeconfig missing "no-preload-198717" context setting]
	I1101 12:00:25.678379  728709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:00:25.680075  728709 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:00:25.692988  728709 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 12:00:25.693019  728709 kubeadm.go:602] duration metric: took 29.400831ms to restartPrimaryControlPlane
	I1101 12:00:25.693029  728709 kubeadm.go:403] duration metric: took 128.311745ms to StartCluster
	I1101 12:00:25.693043  728709 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:00:25.693111  728709 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:00:25.694585  728709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:00:25.694799  728709 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:00:25.695297  728709 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:00:25.695371  728709 addons.go:70] Setting storage-provisioner=true in profile "no-preload-198717"
	I1101 12:00:25.695385  728709 addons.go:239] Setting addon storage-provisioner=true in "no-preload-198717"
	W1101 12:00:25.695391  728709 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:00:25.695415  728709 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 12:00:25.695898  728709 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:00:25.696189  728709 config.go:182] Loaded profile config "no-preload-198717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:00:25.696268  728709 addons.go:70] Setting dashboard=true in profile "no-preload-198717"
	I1101 12:00:25.696306  728709 addons.go:239] Setting addon dashboard=true in "no-preload-198717"
	W1101 12:00:25.696331  728709 addons.go:248] addon dashboard should already be in state true
	I1101 12:00:25.696387  728709 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 12:00:25.696881  728709 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:00:25.700819  728709 addons.go:70] Setting default-storageclass=true in profile "no-preload-198717"
	I1101 12:00:25.700862  728709 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-198717"
	I1101 12:00:25.701218  728709 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:00:25.701625  728709 out.go:179] * Verifying Kubernetes components...
	I1101 12:00:25.707925  728709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:00:25.757026  728709 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:00:25.757025  728709 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:00:25.760692  728709 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:00:25.760718  728709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:00:25.760789  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:25.771754  728709 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1101 12:00:22.583086  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	W1101 12:00:25.083131  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	I1101 12:00:25.777742  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:00:25.777774  728709 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:00:25.777855  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:25.781676  728709 addons.go:239] Setting addon default-storageclass=true in "no-preload-198717"
	W1101 12:00:25.781721  728709 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:00:25.781746  728709 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 12:00:25.782144  728709 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:00:25.835998  728709 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:00:25.836020  728709 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:00:25.836095  728709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:00:25.837452  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:25.838189  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:25.869388  728709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:00:26.071384  728709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:00:26.094817  728709 node_ready.go:35] waiting up to 6m0s for node "no-preload-198717" to be "Ready" ...
	I1101 12:00:26.115590  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:00:26.115665  728709 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:00:26.124930  728709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:00:26.138347  728709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:00:26.159375  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:00:26.159448  728709 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:00:26.215530  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:00:26.215611  728709 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:00:26.269561  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:00:26.269633  728709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:00:26.360810  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:00:26.360907  728709 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:00:26.385351  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:00:26.385425  728709 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:00:26.406372  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:00:26.406449  728709 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:00:26.432904  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:00:26.432982  728709 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:00:26.457847  728709 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:00:26.457909  728709 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:00:26.474788  728709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 12:00:27.083340  724423 node_ready.go:57] node "embed-certs-816860" has "Ready":"False" status (will retry)
	I1101 12:00:27.582814  724423 node_ready.go:49] node "embed-certs-816860" is "Ready"
	I1101 12:00:27.582846  724423 node_ready.go:38] duration metric: took 41.50326202s for node "embed-certs-816860" to be "Ready" ...
	I1101 12:00:27.582861  724423 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:00:27.582922  724423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:00:27.603255  724423 api_server.go:72] duration metric: took 42.490166022s to wait for apiserver process to appear ...
	I1101 12:00:27.603280  724423 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:00:27.603300  724423 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:00:27.614152  724423 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
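	The healthz probe logged above is an ordinary HTTPS GET against the API server; assuming the same endpoint and the cluster CA from the default minikube layout, it can be repeated manually:
	  # --cacert points at the cluster CA so -k is not needed
	  curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.76.2:8443/healthz
	  # ?verbose lists each individual check even when the overall status is ok
	  curl --cacert /var/lib/minikube/certs/ca.crt "https://192.168.76.2:8443/healthz?verbose"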
	I1101 12:00:27.618939  724423 api_server.go:141] control plane version: v1.34.1
	I1101 12:00:27.618971  724423 api_server.go:131] duration metric: took 15.683608ms to wait for apiserver health ...
	I1101 12:00:27.618980  724423 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:00:27.624111  724423 system_pods.go:59] 8 kube-system pods found
	I1101 12:00:27.624152  724423 system_pods.go:61] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:27.624161  724423 system_pods.go:61] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running
	I1101 12:00:27.624167  724423 system_pods.go:61] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:00:27.624172  724423 system_pods.go:61] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running
	I1101 12:00:27.624206  724423 system_pods.go:61] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running
	I1101 12:00:27.624219  724423 system_pods.go:61] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:00:27.624224  724423 system_pods.go:61] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running
	I1101 12:00:27.624230  724423 system_pods.go:61] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:00:27.624241  724423 system_pods.go:74] duration metric: took 5.255157ms to wait for pod list to return data ...
	I1101 12:00:27.624252  724423 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:00:27.630104  724423 default_sa.go:45] found service account: "default"
	I1101 12:00:27.630130  724423 default_sa.go:55] duration metric: took 5.8673ms for default service account to be created ...
	I1101 12:00:27.630140  724423 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:00:27.634270  724423 system_pods.go:86] 8 kube-system pods found
	I1101 12:00:27.634302  724423 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:27.634309  724423 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running
	I1101 12:00:27.634315  724423 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:00:27.634348  724423 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running
	I1101 12:00:27.634360  724423 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running
	I1101 12:00:27.634365  724423 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:00:27.634370  724423 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running
	I1101 12:00:27.634384  724423 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:00:27.634426  724423 retry.go:31] will retry after 271.349804ms: missing components: kube-dns
	I1101 12:00:27.909887  724423 system_pods.go:86] 8 kube-system pods found
	I1101 12:00:27.909922  724423 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:27.909929  724423 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running
	I1101 12:00:27.909965  724423 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:00:27.909977  724423 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running
	I1101 12:00:27.909983  724423 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running
	I1101 12:00:27.909994  724423 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:00:27.909998  724423 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running
	I1101 12:00:27.910003  724423 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:00:27.910036  724423 retry.go:31] will retry after 340.861075ms: missing components: kube-dns
	I1101 12:00:28.267492  724423 system_pods.go:86] 8 kube-system pods found
	I1101 12:00:28.267536  724423 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:28.267562  724423 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running
	I1101 12:00:28.267575  724423 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:00:28.267580  724423 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running
	I1101 12:00:28.267591  724423 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running
	I1101 12:00:28.267596  724423 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:00:28.267622  724423 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running
	I1101 12:00:28.267641  724423 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:00:28.267664  724423 retry.go:31] will retry after 482.870459ms: missing components: kube-dns
	I1101 12:00:28.754577  724423 system_pods.go:86] 8 kube-system pods found
	I1101 12:00:28.754613  724423 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:28.754620  724423 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running
	I1101 12:00:28.754626  724423 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:00:28.754657  724423 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running
	I1101 12:00:28.754669  724423 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running
	I1101 12:00:28.754674  724423 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:00:28.754679  724423 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running
	I1101 12:00:28.754687  724423 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:00:28.754706  724423 retry.go:31] will retry after 383.264082ms: missing components: kube-dns
	I1101 12:00:29.141796  724423 system_pods.go:86] 8 kube-system pods found
	I1101 12:00:29.141828  724423 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Running
	I1101 12:00:29.141845  724423 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running
	I1101 12:00:29.141869  724423 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:00:29.141880  724423 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running
	I1101 12:00:29.141886  724423 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running
	I1101 12:00:29.141891  724423 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:00:29.141901  724423 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running
	I1101 12:00:29.141905  724423 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Running
	I1101 12:00:29.141930  724423 system_pods.go:126] duration metric: took 1.511782825s to wait for k8s-apps to be running ...
	I1101 12:00:29.141945  724423 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:00:29.142030  724423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:00:29.162202  724423 system_svc.go:56] duration metric: took 20.248401ms WaitForService to wait for kubelet
	I1101 12:00:29.162239  724423 kubeadm.go:587] duration metric: took 44.049155683s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:00:29.162274  724423 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:00:29.165061  724423 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:00:29.165091  724423 node_conditions.go:123] node cpu capacity is 2
	I1101 12:00:29.165135  724423 node_conditions.go:105] duration metric: took 2.825989ms to run NodePressure ...
	I1101 12:00:29.165153  724423 start.go:242] waiting for startup goroutines ...
	I1101 12:00:29.165162  724423 start.go:247] waiting for cluster config update ...
	I1101 12:00:29.165190  724423 start.go:256] writing updated cluster config ...
	I1101 12:00:29.165538  724423 ssh_runner.go:195] Run: rm -f paused
	I1101 12:00:29.169287  724423 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:00:29.173217  724423 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4d2b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.178427  724423 pod_ready.go:94] pod "coredns-66bc5c9577-4d2b7" is "Ready"
	I1101 12:00:29.178456  724423 pod_ready.go:86] duration metric: took 5.192067ms for pod "coredns-66bc5c9577-4d2b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.181842  724423 pod_ready.go:83] waiting for pod "etcd-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.186848  724423 pod_ready.go:94] pod "etcd-embed-certs-816860" is "Ready"
	I1101 12:00:29.186922  724423 pod_ready.go:86] duration metric: took 5.005332ms for pod "etcd-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.189439  724423 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.195006  724423 pod_ready.go:94] pod "kube-apiserver-embed-certs-816860" is "Ready"
	I1101 12:00:29.195079  724423 pod_ready.go:86] duration metric: took 5.573815ms for pod "kube-apiserver-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.202372  724423 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.574130  724423 pod_ready.go:94] pod "kube-controller-manager-embed-certs-816860" is "Ready"
	I1101 12:00:29.574211  724423 pod_ready.go:86] duration metric: took 371.772168ms for pod "kube-controller-manager-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:29.774654  724423 pod_ready.go:83] waiting for pod "kube-proxy-q5757" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:30.174092  724423 pod_ready.go:94] pod "kube-proxy-q5757" is "Ready"
	I1101 12:00:30.174179  724423 pod_ready.go:86] duration metric: took 399.45017ms for pod "kube-proxy-q5757" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:30.375077  724423 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:30.773547  724423 pod_ready.go:94] pod "kube-scheduler-embed-certs-816860" is "Ready"
	I1101 12:00:30.773626  724423 pod_ready.go:86] duration metric: took 398.525785ms for pod "kube-scheduler-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:00:30.773654  724423 pod_ready.go:40] duration metric: took 1.604329325s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:00:30.887934  724423 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:00:30.893630  724423 out.go:179] * Done! kubectl is now configured to use "embed-certs-816860" cluster and "default" namespace by default
	I1101 12:00:31.387584  728709 node_ready.go:49] node "no-preload-198717" is "Ready"
	I1101 12:00:31.387617  728709 node_ready.go:38] duration metric: took 5.292711555s for node "no-preload-198717" to be "Ready" ...
	I1101 12:00:31.387632  728709 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:00:31.387689  728709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:00:31.701122  728709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.576095464s)
	I1101 12:00:32.948592  728709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.810162064s)
	I1101 12:00:33.326667  728709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.851784124s)
	I1101 12:00:33.326899  728709 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.939192405s)
	I1101 12:00:33.326930  728709 api_server.go:72] duration metric: took 7.632108864s to wait for apiserver process to appear ...
	I1101 12:00:33.326940  728709 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:00:33.326966  728709 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 12:00:33.329931  728709 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-198717 addons enable metrics-server
	
	I1101 12:00:33.332850  728709 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1101 12:00:33.335700  728709 addons.go:515] duration metric: took 7.640394067s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1101 12:00:33.336599  728709 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 12:00:33.336621  728709 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 12:00:33.827956  728709 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 12:00:33.842167  728709 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 12:00:33.846416  728709 api_server.go:141] control plane version: v1.34.1
	I1101 12:00:33.846494  728709 api_server.go:131] duration metric: took 519.546236ms to wait for apiserver health ...
	I1101 12:00:33.846506  728709 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:00:33.852423  728709 system_pods.go:59] 8 kube-system pods found
	I1101 12:00:33.852458  728709 system_pods.go:61] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:33.852467  728709 system_pods.go:61] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:00:33.852473  728709 system_pods.go:61] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 12:00:33.852481  728709 system_pods.go:61] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:00:33.852487  728709 system_pods.go:61] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:00:33.852492  728709 system_pods.go:61] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 12:00:33.852500  728709 system_pods.go:61] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:00:33.852504  728709 system_pods.go:61] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Running
	I1101 12:00:33.852510  728709 system_pods.go:74] duration metric: took 5.997829ms to wait for pod list to return data ...
	I1101 12:00:33.852517  728709 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:00:33.856713  728709 default_sa.go:45] found service account: "default"
	I1101 12:00:33.856735  728709 default_sa.go:55] duration metric: took 4.212419ms for default service account to be created ...
	I1101 12:00:33.856745  728709 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:00:33.861041  728709 system_pods.go:86] 8 kube-system pods found
	I1101 12:00:33.861135  728709 system_pods.go:89] "coredns-66bc5c9577-s7p9w" [487ba34d-2e32-4d07-bcf2-d5ed1a59340b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:00:33.861168  728709 system_pods.go:89] "etcd-no-preload-198717" [254941a7-95dc-417f-97be-e3fce18cb3fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:00:33.861188  728709 system_pods.go:89] "kindnet-qnmmf" [f70495ad-543e-4581-98b7-9e82ba963087] Running
	I1101 12:00:33.861213  728709 system_pods.go:89] "kube-apiserver-no-preload-198717" [67548db9-5432-4574-bf12-b20ce6cafead] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:00:33.861241  728709 system_pods.go:89] "kube-controller-manager-no-preload-198717" [7865ffaf-26b0-4526-98a2-15c997a72dec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:00:33.861268  728709 system_pods.go:89] "kube-proxy-tlh2v" [ded2c625-39aa-414d-b063-d523a28dd850] Running
	I1101 12:00:33.861290  728709 system_pods.go:89] "kube-scheduler-no-preload-198717" [610d540b-b744-4f68-881f-a9d00d06983d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:00:33.861312  728709 system_pods.go:89] "storage-provisioner" [7242eed2-7588-463b-9906-b5289039fe17] Running
	I1101 12:00:33.861347  728709 system_pods.go:126] duration metric: took 4.595399ms to wait for k8s-apps to be running ...
	I1101 12:00:33.861370  728709 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:00:33.861443  728709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:00:33.886989  728709 system_svc.go:56] duration metric: took 25.609849ms WaitForService to wait for kubelet
	I1101 12:00:33.887057  728709 kubeadm.go:587] duration metric: took 8.192234652s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:00:33.887095  728709 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:00:33.890633  728709 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:00:33.890711  728709 node_conditions.go:123] node cpu capacity is 2
	I1101 12:00:33.890739  728709 node_conditions.go:105] duration metric: took 3.61996ms to run NodePressure ...
	I1101 12:00:33.890771  728709 start.go:242] waiting for startup goroutines ...
	I1101 12:00:33.890796  728709 start.go:247] waiting for cluster config update ...
	I1101 12:00:33.890822  728709 start.go:256] writing updated cluster config ...
	I1101 12:00:33.891141  728709 ssh_runner.go:195] Run: rm -f paused
	I1101 12:00:33.895393  728709 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:00:33.900437  728709 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7p9w" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 12:00:35.906504  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
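	The extra pod_ready wait above is roughly equivalent to waiting on the control-plane and kube-dns labels listed in the log line; a minimal kubectl sketch (label selectors taken from that line, timeout chosen here only for illustration) would be:
	  kubectl --context no-preload-198717 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	  kubectl --context no-preload-198717 -n kube-system wait pod \
	    -l component=kube-apiserver --for=condition=Ready --timeout=4m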
	
	
	==> CRI-O <==
	Nov 01 12:00:28 embed-certs-816860 crio[846]: time="2025-11-01T12:00:28.216468465Z" level=info msg="Created container acffede8cca3eec740d76c31128e7fc8fe8b038fab158608035be889c307e737: kube-system/coredns-66bc5c9577-4d2b7/coredns" id=a4888ad8-85c8-4e5d-b762-141773619733 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:00:28 embed-certs-816860 crio[846]: time="2025-11-01T12:00:28.217403747Z" level=info msg="Starting container: acffede8cca3eec740d76c31128e7fc8fe8b038fab158608035be889c307e737" id=9b8a39ae-aa34-40fb-a5ea-c6885106e5af name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:00:28 embed-certs-816860 crio[846]: time="2025-11-01T12:00:28.229366592Z" level=info msg="Started container" PID=1737 containerID=acffede8cca3eec740d76c31128e7fc8fe8b038fab158608035be889c307e737 description=kube-system/coredns-66bc5c9577-4d2b7/coredns id=9b8a39ae-aa34-40fb-a5ea-c6885106e5af name=/runtime.v1.RuntimeService/StartContainer sandboxID=efbc1b6cf6bdb99b9a8e0d6c27c866cc3002f774628dacfc73b030549e8183eb
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.512859142Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b7da6f62-0ee3-4060-a01c-07be523a8afe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.512932718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.5291475Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0 UID:e41d4e23-ef87-4bf1-a0d7-6261913ab0ec NetNS:/var/run/netns/dfd84325-d3c8-44c0-a3dd-a127c6423bdf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000deb68}] Aliases:map[]}"
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.529425707Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.556118999Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0 UID:e41d4e23-ef87-4bf1-a0d7-6261913ab0ec NetNS:/var/run/netns/dfd84325-d3c8-44c0-a3dd-a127c6423bdf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000deb68}] Aliases:map[]}"
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.556420705Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.564875379Z" level=info msg="Ran pod sandbox 34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0 with infra container: default/busybox/POD" id=b7da6f62-0ee3-4060-a01c-07be523a8afe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.566190981Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c93ac21f-0e94-4fdd-af52-6f8bb0d5bd2b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.566440978Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c93ac21f-0e94-4fdd-af52-6f8bb0d5bd2b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.566547704Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c93ac21f-0e94-4fdd-af52-6f8bb0d5bd2b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.575110506Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=69f36f6e-85fc-408f-a9a7-fe77c4c7535a name=/runtime.v1.ImageService/PullImage
	Nov 01 12:00:31 embed-certs-816860 crio[846]: time="2025-11-01T12:00:31.582145772Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 12:00:33 embed-certs-816860 crio[846]: time="2025-11-01T12:00:33.995920296Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=69f36f6e-85fc-408f-a9a7-fe77c4c7535a name=/runtime.v1.ImageService/PullImage
	Nov 01 12:00:33 embed-certs-816860 crio[846]: time="2025-11-01T12:00:33.997364491Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6ce1dbfc-25d6-4646-984c-13aaa9b42197 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.001677638Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=12f58841-27a1-43a7-9a9f-0fe3327b0d2d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.010467191Z" level=info msg="Creating container: default/busybox/busybox" id=6daf4350-bb1f-479a-8388-1a8c3688ee46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.010728077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.01932786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.020005957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.042360797Z" level=info msg="Created container 45b275e5fe9d8caa963b8045b73600331b457f41e7f06bde6b1f8439d56c91cd: default/busybox/busybox" id=6daf4350-bb1f-479a-8388-1a8c3688ee46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.047589402Z" level=info msg="Starting container: 45b275e5fe9d8caa963b8045b73600331b457f41e7f06bde6b1f8439d56c91cd" id=ca9c62a3-3618-4fa4-886a-5c24a070d333 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:00:34 embed-certs-816860 crio[846]: time="2025-11-01T12:00:34.053315449Z" level=info msg="Started container" PID=1791 containerID=45b275e5fe9d8caa963b8045b73600331b457f41e7f06bde6b1f8439d56c91cd description=default/busybox/busybox id=ca9c62a3-3618-4fa4-886a-5c24a070d333 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	45b275e5fe9d8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   34c43a43a5707       busybox                                      default
	acffede8cca3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   efbc1b6cf6bdb       coredns-66bc5c9577-4d2b7                     kube-system
	bd01860a612b2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago       Running             storage-provisioner       0                   491fb727f1660       storage-provisioner                          kube-system
	b8547bc6fe247       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   72509e563b5ea       kindnet-zmkct                                kube-system
	123d27618ddba       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   6ebfb24f63ae1       kube-proxy-q5757                             kube-system
	eb3ab03ad521e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   0b09e4c9adfe2       kube-apiserver-embed-certs-816860            kube-system
	785bdd8eaea4d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   b56b2b8b4ca50       kube-scheduler-embed-certs-816860            kube-system
	29b84653941db       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   e44d31b04476b       kube-controller-manager-embed-certs-816860   kube-system
	991dd7f1d0c0e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b7338788dd46a       etcd-embed-certs-816860                      kube-system
	
	
	==> coredns [acffede8cca3eec740d76c31128e7fc8fe8b038fab158608035be889c307e737] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55029 - 34428 "HINFO IN 987561309323512995.5511732650652597841. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022262441s
	
	
	==> describe nodes <==
	Name:               embed-certs-816860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-816860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=embed-certs-816860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_59_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-816860
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:00:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:00:40 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:00:40 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:00:40 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:00:40 +0000   Sat, 01 Nov 2025 12:00:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-816860
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                2c9a5c97-2e6e-4e74-beca-17c7b3951a1d
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-4d2b7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-embed-certs-816860                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         65s
	  kube-system                 kindnet-zmkct                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-embed-certs-816860             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-embed-certs-816860    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-q5757                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-embed-certs-816860             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Normal   Starting                 75s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 75s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x8 over 75s)  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s                kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s                kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s                kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node embed-certs-816860 event: Registered Node embed-certs-816860 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-816860 status is now: NodeReady
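	The node summary above is the standard describe output and can be regenerated at any point against this profile, either with a host kubectl or through minikube's bundled one:
	  kubectl --context embed-certs-816860 describe node embed-certs-816860
	  # or, without kubectl installed on the host:
	  minikube -p embed-certs-816860 kubectl -- describe node embed-certs-816860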
	
	
	==> dmesg <==
	[Nov 1 11:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [991dd7f1d0c0e5158e4a89f2a853d76e3a78fd3c86943ed1e698ae3d922f0d60] <==
	{"level":"warn","ts":"2025-11-01T11:59:33.282641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.328081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.347011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.396462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.414730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.440717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.456017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.471988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.521840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.540204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.597006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.619995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.657681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.695266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.697613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.718190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.760427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.832629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.868236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.922302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.949899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:33.973028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:34.017995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:59:34.248328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T11:59:45.243282Z","caller":"traceutil/trace.go:172","msg":"trace[1377310251] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"116.191334ms","start":"2025-11-01T11:59:45.127075Z","end":"2025-11-01T11:59:45.243266Z","steps":["trace[1377310251] 'process raft request'  (duration: 116.11441ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:00:43 up  3:43,  0 user,  load average: 4.95, 3.74, 2.89
	Linux embed-certs-816860 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8547bc6fe247483af96aa20a9f69f9ac97e0f099d004e6153a38a1b3753efab] <==
	I1101 11:59:46.628436       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 11:59:46.628652       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 11:59:46.628772       1 main.go:148] setting mtu 1500 for CNI 
	I1101 11:59:46.628791       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 11:59:46.628801       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T11:59:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 11:59:46.829247       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 11:59:46.829317       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 11:59:46.829350       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 11:59:46.830148       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:00:16.829946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:00:16.829946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:00:16.830048       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 12:00:16.830121       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 12:00:18.329527       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:00:18.329634       1 metrics.go:72] Registering metrics
	I1101 12:00:18.329774       1 controller.go:711] "Syncing nftables rules"
	I1101 12:00:26.835767       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 12:00:26.835916       1 main.go:301] handling current node
	I1101 12:00:36.829758       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 12:00:36.829872       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eb3ab03ad521e53061329a288a2958d7ede3614f67a7ed39212a6abb8675c7d5] <==
	E1101 11:59:36.400963       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 11:59:36.416198       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 11:59:36.459796       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:36.461846       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1101 11:59:36.468032       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 11:59:36.501270       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:36.502710       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 11:59:36.686514       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:59:36.701589       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 11:59:36.717670       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 11:59:36.717753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:59:37.995012       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:59:38.058755       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:59:38.170966       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 11:59:38.180093       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 11:59:38.181490       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:59:38.187250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:59:38.987036       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 11:59:39.334202       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 11:59:39.375048       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 11:59:39.421618       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 11:59:44.081667       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:44.088117       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 11:59:44.758305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 11:59:45.091067       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [29b84653941db56c6216a58e88571b8915ef955c78c458725506bd4b33ddaf0f] <==
	I1101 11:59:43.996115       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 11:59:44.001804       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 11:59:44.001887       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 11:59:44.001969       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 11:59:44.002792       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 11:59:44.002861       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 11:59:44.002943       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 11:59:44.002979       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 11:59:44.003020       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 11:59:44.003253       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 11:59:44.003982       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 11:59:44.005201       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 11:59:44.005315       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 11:59:44.005409       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-816860"
	I1101 11:59:44.005474       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 11:59:44.006826       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 11:59:44.006872       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:59:44.014163       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 11:59:44.015866       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 11:59:44.015928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 11:59:44.026675       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 11:59:44.026773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 11:59:44.039521       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 11:59:44.042312       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:00:29.059468       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [123d27618ddba8bb3545fcdf02f0691488a7a2498763d3adb13a84931783f7c4] <==
	I1101 11:59:46.512676       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:59:46.607701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:59:46.711797       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:59:46.711848       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 11:59:46.711987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:59:46.753183       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:59:46.753297       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:59:46.757300       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:59:46.757778       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:59:46.757975       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:59:46.759986       1 config.go:200] "Starting service config controller"
	I1101 11:59:46.760038       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:59:46.760100       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:59:46.760130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:59:46.760178       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:59:46.760206       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:59:46.764053       1 config.go:309] "Starting node config controller"
	I1101 11:59:46.765740       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:59:46.765763       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:59:46.860578       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:59:46.860589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:59:46.860611       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [785bdd8eaea4d6ee0d739bd6bb4baad7615fd04f998a925b6d278d325aa2db4b] <==
	E1101 11:59:36.444475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 11:59:36.444652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 11:59:36.444751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 11:59:36.444953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 11:59:36.445055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 11:59:36.445163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 11:59:36.446958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 11:59:36.447085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:59:36.447185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 11:59:36.447376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 11:59:36.447626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 11:59:36.447751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 11:59:36.447882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 11:59:36.447905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 11:59:36.456183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 11:59:37.313972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 11:59:37.386979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 11:59:37.426044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 11:59:37.429988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 11:59:37.447253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 11:59:37.457170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:59:37.472117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 11:59:37.507782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 11:59:37.830512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 11:59:39.806808       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: E1101 11:59:45.367615    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-zmkct\" is forbidden: User \"system:node:embed-certs-816860\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-816860' and this object" podUID="e84bf106-0b04-4eb0-b1a5-fd02fe9447ce" pod="kube-system/kindnet-zmkct"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: E1101 11:59:45.368233    1305 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-816860\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-816860' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.392336    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e84bf106-0b04-4eb0-b1a5-fd02fe9447ce-lib-modules\") pod \"kindnet-zmkct\" (UID: \"e84bf106-0b04-4eb0-b1a5-fd02fe9447ce\") " pod="kube-system/kindnet-zmkct"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.392386    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flgwv\" (UniqueName: \"kubernetes.io/projected/e84bf106-0b04-4eb0-b1a5-fd02fe9447ce-kube-api-access-flgwv\") pod \"kindnet-zmkct\" (UID: \"e84bf106-0b04-4eb0-b1a5-fd02fe9447ce\") " pod="kube-system/kindnet-zmkct"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.392413    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e84bf106-0b04-4eb0-b1a5-fd02fe9447ce-cni-cfg\") pod \"kindnet-zmkct\" (UID: \"e84bf106-0b04-4eb0-b1a5-fd02fe9447ce\") " pod="kube-system/kindnet-zmkct"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.392435    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e84bf106-0b04-4eb0-b1a5-fd02fe9447ce-xtables-lock\") pod \"kindnet-zmkct\" (UID: \"e84bf106-0b04-4eb0-b1a5-fd02fe9447ce\") " pod="kube-system/kindnet-zmkct"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.493801    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/105f4e25-c2c1-40ce-9ca4-b9327682eb0a-lib-modules\") pod \"kube-proxy-q5757\" (UID: \"105f4e25-c2c1-40ce-9ca4-b9327682eb0a\") " pod="kube-system/kube-proxy-q5757"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.493885    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lcj6\" (UniqueName: \"kubernetes.io/projected/105f4e25-c2c1-40ce-9ca4-b9327682eb0a-kube-api-access-4lcj6\") pod \"kube-proxy-q5757\" (UID: \"105f4e25-c2c1-40ce-9ca4-b9327682eb0a\") " pod="kube-system/kube-proxy-q5757"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.493923    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/105f4e25-c2c1-40ce-9ca4-b9327682eb0a-xtables-lock\") pod \"kube-proxy-q5757\" (UID: \"105f4e25-c2c1-40ce-9ca4-b9327682eb0a\") " pod="kube-system/kube-proxy-q5757"
	Nov 01 11:59:45 embed-certs-816860 kubelet[1305]: I1101 11:59:45.494026    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/105f4e25-c2c1-40ce-9ca4-b9327682eb0a-kube-proxy\") pod \"kube-proxy-q5757\" (UID: \"105f4e25-c2c1-40ce-9ca4-b9327682eb0a\") " pod="kube-system/kube-proxy-q5757"
	Nov 01 11:59:46 embed-certs-816860 kubelet[1305]: I1101 11:59:46.261304    1305 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 11:59:46 embed-certs-816860 kubelet[1305]: W1101 11:59:46.340078    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/crio-6ebfb24f63ae17ccb6cdd8486aefc0e01c7d5b1fcd01e70e424403b4174450f1 WatchSource:0}: Error finding container 6ebfb24f63ae17ccb6cdd8486aefc0e01c7d5b1fcd01e70e424403b4174450f1: Status 404 returned error can't find the container with id 6ebfb24f63ae17ccb6cdd8486aefc0e01c7d5b1fcd01e70e424403b4174450f1
	Nov 01 11:59:46 embed-certs-816860 kubelet[1305]: W1101 11:59:46.554856    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/crio-72509e563b5ead70f61afaf4362591dcd4162e52769de5a396366da9be57c487 WatchSource:0}: Error finding container 72509e563b5ead70f61afaf4362591dcd4162e52769de5a396366da9be57c487: Status 404 returned error can't find the container with id 72509e563b5ead70f61afaf4362591dcd4162e52769de5a396366da9be57c487
	Nov 01 11:59:46 embed-certs-816860 kubelet[1305]: I1101 11:59:46.726284    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zmkct" podStartSLOduration=1.72626412 podStartE2EDuration="1.72626412s" podCreationTimestamp="2025-11-01 11:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:59:46.706731216 +0000 UTC m=+7.509496098" watchObservedRunningTime="2025-11-01 11:59:46.72626412 +0000 UTC m=+7.529028986"
	Nov 01 12:00:27 embed-certs-816860 kubelet[1305]: I1101 12:00:27.375552    1305 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 12:00:27 embed-certs-816860 kubelet[1305]: I1101 12:00:27.420551    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5757" podStartSLOduration=42.420533701 podStartE2EDuration="42.420533701s" podCreationTimestamp="2025-11-01 11:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 11:59:46.727385562 +0000 UTC m=+7.530150445" watchObservedRunningTime="2025-11-01 12:00:27.420533701 +0000 UTC m=+48.223298567"
	Nov 01 12:00:27 embed-certs-816860 kubelet[1305]: I1101 12:00:27.609348    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bb93e4fb-e7b0-49ed-8abb-9842fc9950c6-tmp\") pod \"storage-provisioner\" (UID: \"bb93e4fb-e7b0-49ed-8abb-9842fc9950c6\") " pod="kube-system/storage-provisioner"
	Nov 01 12:00:27 embed-certs-816860 kubelet[1305]: I1101 12:00:27.609407    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8pc\" (UniqueName: \"kubernetes.io/projected/bb93e4fb-e7b0-49ed-8abb-9842fc9950c6-kube-api-access-bm8pc\") pod \"storage-provisioner\" (UID: \"bb93e4fb-e7b0-49ed-8abb-9842fc9950c6\") " pod="kube-system/storage-provisioner"
	Nov 01 12:00:27 embed-certs-816860 kubelet[1305]: I1101 12:00:27.609433    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27152cf3-def0-4a5e-baae-3dcead2874e2-config-volume\") pod \"coredns-66bc5c9577-4d2b7\" (UID: \"27152cf3-def0-4a5e-baae-3dcead2874e2\") " pod="kube-system/coredns-66bc5c9577-4d2b7"
	Nov 01 12:00:27 embed-certs-816860 kubelet[1305]: I1101 12:00:27.609453    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48p57\" (UniqueName: \"kubernetes.io/projected/27152cf3-def0-4a5e-baae-3dcead2874e2-kube-api-access-48p57\") pod \"coredns-66bc5c9577-4d2b7\" (UID: \"27152cf3-def0-4a5e-baae-3dcead2874e2\") " pod="kube-system/coredns-66bc5c9577-4d2b7"
	Nov 01 12:00:28 embed-certs-816860 kubelet[1305]: W1101 12:00:28.093848    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/crio-efbc1b6cf6bdb99b9a8e0d6c27c866cc3002f774628dacfc73b030549e8183eb WatchSource:0}: Error finding container efbc1b6cf6bdb99b9a8e0d6c27c866cc3002f774628dacfc73b030549e8183eb: Status 404 returned error can't find the container with id efbc1b6cf6bdb99b9a8e0d6c27c866cc3002f774628dacfc73b030549e8183eb
	Nov 01 12:00:28 embed-certs-816860 kubelet[1305]: I1101 12:00:28.823402    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.823372464 podStartE2EDuration="42.823372464s" podCreationTimestamp="2025-11-01 11:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:00:28.82298873 +0000 UTC m=+49.625753604" watchObservedRunningTime="2025-11-01 12:00:28.823372464 +0000 UTC m=+49.626137322"
	Nov 01 12:00:31 embed-certs-816860 kubelet[1305]: I1101 12:00:31.202543    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4d2b7" podStartSLOduration=46.202525002 podStartE2EDuration="46.202525002s" podCreationTimestamp="2025-11-01 11:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:00:28.846914044 +0000 UTC m=+49.649678927" watchObservedRunningTime="2025-11-01 12:00:31.202525002 +0000 UTC m=+52.005289868"
	Nov 01 12:00:31 embed-certs-816860 kubelet[1305]: I1101 12:00:31.342180    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvjz\" (UniqueName: \"kubernetes.io/projected/e41d4e23-ef87-4bf1-a0d7-6261913ab0ec-kube-api-access-nhvjz\") pod \"busybox\" (UID: \"e41d4e23-ef87-4bf1-a0d7-6261913ab0ec\") " pod="default/busybox"
	Nov 01 12:00:31 embed-certs-816860 kubelet[1305]: W1101 12:00:31.561424    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/crio-34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0 WatchSource:0}: Error finding container 34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0: Status 404 returned error can't find the container with id 34c43a43a5707028d1e72485eec646b9cc4edd4182c8ad3de272cfa78a8f48b0
	
	
	==> storage-provisioner [bd01860a612b2efbfe3a3b33e4a8d58c31de9363d6b169c6d1c4ee9bbd12c063] <==
	I1101 12:00:28.178143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:00:28.211054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:00:28.211202       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:00:28.218569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:28.226725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:00:28.226878       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:00:28.229161       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-816860_ce19e599-eaf8-4138-bb37-b71b41589af4!
	I1101 12:00:28.242948       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51749ed9-875c-4abb-abce-2d05599a8ef5", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-816860_ce19e599-eaf8-4138-bb37-b71b41589af4 became leader
	W1101 12:00:28.245858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:28.302492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:00:28.329293       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-816860_ce19e599-eaf8-4138-bb37-b71b41589af4!
	W1101 12:00:30.306759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:30.314978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:32.318462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:32.331507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:34.335072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:34.341863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:36.344942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:36.352214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:38.356204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:38.363256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:40.366134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:40.374803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:42.381407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:00:42.391710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-816860 -n embed-certs-816860
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-816860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.35s)
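For reference, the post-mortem checks recorded above can be re-run by hand against the same profile; a minimal sketch, assuming the embed-certs-816860 cluster from this report still exists and out/minikube-linux-arm64 is the binary under test (commands taken from the helper output and minikube's own failure message, not additional tooling):

    # API server status for the profile/node named in the helper output
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-816860 -n embed-certs-816860
    # list any pods that are not Running, same field selector the test helper uses
    kubectl --context embed-certs-816860 get po -A --field-selector=status.phase!=Running
    # collect the full log bundle, as suggested in minikube's failure box
    out/minikube-linux-arm64 -p embed-certs-816860 logs --file=logs.txt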

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-198717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-198717 --alsologtostderr -v=1: exit status 80 (2.279922043s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-198717 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 12:01:20.496049  733793 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:01:20.497206  733793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:01:20.497224  733793 out.go:374] Setting ErrFile to fd 2...
	I1101 12:01:20.497232  733793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:01:20.497664  733793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:01:20.498150  733793 out.go:368] Setting JSON to false
	I1101 12:01:20.498193  733793 mustload.go:66] Loading cluster: no-preload-198717
	I1101 12:01:20.498868  733793 config.go:182] Loaded profile config "no-preload-198717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:20.499673  733793 cli_runner.go:164] Run: docker container inspect no-preload-198717 --format={{.State.Status}}
	I1101 12:01:20.525789  733793 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 12:01:20.526126  733793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:01:20.641142  733793 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 12:01:20.628246784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:01:20.642093  733793 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-198717 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 12:01:20.646198  733793 out.go:179] * Pausing node no-preload-198717 ... 
	I1101 12:01:20.649198  733793 host.go:66] Checking if "no-preload-198717" exists ...
	I1101 12:01:20.649560  733793 ssh_runner.go:195] Run: systemctl --version
	I1101 12:01:20.649609  733793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-198717
	I1101 12:01:20.677667  733793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/no-preload-198717/id_rsa Username:docker}
	I1101 12:01:20.789875  733793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:01:20.814245  733793 pause.go:52] kubelet running: true
	I1101 12:01:20.814319  733793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:01:21.193306  733793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:01:21.193392  733793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:01:21.292877  733793 cri.go:89] found id: "c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3"
	I1101 12:01:21.292908  733793 cri.go:89] found id: "c2cce036682945a016dcf6de8c5b63c2797f50b4a3bde0c30d10229ce295a9df"
	I1101 12:01:21.292914  733793 cri.go:89] found id: "6fb0dc993fbe24c714882e89d86ada6b3ba240cf813b4528e24730daf7e3b3d8"
	I1101 12:01:21.292919  733793 cri.go:89] found id: "b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa"
	I1101 12:01:21.292939  733793 cri.go:89] found id: "a22b83973f57d185b05c922046586076ab67a6c7b4b442258a7b45e95082a942"
	I1101 12:01:21.292948  733793 cri.go:89] found id: "4146bebcfc78fff7e205d15a351a3b9489d9f1d7f2ce428d242490a4a9a214da"
	I1101 12:01:21.292951  733793 cri.go:89] found id: "f3772f41e725d1af7e862ae449d7118696e53f3be37b8779faa9d26f954875a8"
	I1101 12:01:21.292954  733793 cri.go:89] found id: "9d24638b6e39f00dc4f5ad46eade0ee4467aa0d861d222443a6b43a6ccaaf579"
	I1101 12:01:21.292958  733793 cri.go:89] found id: "41bc0ffb4ace7b78b5269921a034d897960eed08f17125d3ab8c8df9c3a224fd"
	I1101 12:01:21.292966  733793 cri.go:89] found id: "d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	I1101 12:01:21.292978  733793 cri.go:89] found id: "1b419ba60a9359ae536898d052ea0d4354e52910cfd2d21aa073abc9c568c354"
	I1101 12:01:21.292982  733793 cri.go:89] found id: ""
	I1101 12:01:21.293047  733793 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:01:21.313403  733793 retry.go:31] will retry after 207.525249ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:01:21Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:01:21.521915  733793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:01:21.542779  733793 pause.go:52] kubelet running: false
	I1101 12:01:21.542932  733793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:01:21.880126  733793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:01:21.880249  733793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:01:22.003801  733793 cri.go:89] found id: "c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3"
	I1101 12:01:22.003824  733793 cri.go:89] found id: "c2cce036682945a016dcf6de8c5b63c2797f50b4a3bde0c30d10229ce295a9df"
	I1101 12:01:22.003909  733793 cri.go:89] found id: "6fb0dc993fbe24c714882e89d86ada6b3ba240cf813b4528e24730daf7e3b3d8"
	I1101 12:01:22.003915  733793 cri.go:89] found id: "b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa"
	I1101 12:01:22.003919  733793 cri.go:89] found id: "a22b83973f57d185b05c922046586076ab67a6c7b4b442258a7b45e95082a942"
	I1101 12:01:22.003930  733793 cri.go:89] found id: "4146bebcfc78fff7e205d15a351a3b9489d9f1d7f2ce428d242490a4a9a214da"
	I1101 12:01:22.003934  733793 cri.go:89] found id: "f3772f41e725d1af7e862ae449d7118696e53f3be37b8779faa9d26f954875a8"
	I1101 12:01:22.003938  733793 cri.go:89] found id: "9d24638b6e39f00dc4f5ad46eade0ee4467aa0d861d222443a6b43a6ccaaf579"
	I1101 12:01:22.003941  733793 cri.go:89] found id: "41bc0ffb4ace7b78b5269921a034d897960eed08f17125d3ab8c8df9c3a224fd"
	I1101 12:01:22.004003  733793 cri.go:89] found id: "d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	I1101 12:01:22.004014  733793 cri.go:89] found id: "1b419ba60a9359ae536898d052ea0d4354e52910cfd2d21aa073abc9c568c354"
	I1101 12:01:22.004018  733793 cri.go:89] found id: ""
	I1101 12:01:22.004117  733793 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:01:22.030934  733793 retry.go:31] will retry after 249.536834ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:01:22Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:01:22.281311  733793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:01:22.297746  733793 pause.go:52] kubelet running: false
	I1101 12:01:22.297859  733793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:01:22.551944  733793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:01:22.552090  733793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:01:22.653308  733793 cri.go:89] found id: "c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3"
	I1101 12:01:22.653381  733793 cri.go:89] found id: "c2cce036682945a016dcf6de8c5b63c2797f50b4a3bde0c30d10229ce295a9df"
	I1101 12:01:22.653400  733793 cri.go:89] found id: "6fb0dc993fbe24c714882e89d86ada6b3ba240cf813b4528e24730daf7e3b3d8"
	I1101 12:01:22.653420  733793 cri.go:89] found id: "b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa"
	I1101 12:01:22.653440  733793 cri.go:89] found id: "a22b83973f57d185b05c922046586076ab67a6c7b4b442258a7b45e95082a942"
	I1101 12:01:22.653474  733793 cri.go:89] found id: "4146bebcfc78fff7e205d15a351a3b9489d9f1d7f2ce428d242490a4a9a214da"
	I1101 12:01:22.653492  733793 cri.go:89] found id: "f3772f41e725d1af7e862ae449d7118696e53f3be37b8779faa9d26f954875a8"
	I1101 12:01:22.653510  733793 cri.go:89] found id: "9d24638b6e39f00dc4f5ad46eade0ee4467aa0d861d222443a6b43a6ccaaf579"
	I1101 12:01:22.653529  733793 cri.go:89] found id: "41bc0ffb4ace7b78b5269921a034d897960eed08f17125d3ab8c8df9c3a224fd"
	I1101 12:01:22.653565  733793 cri.go:89] found id: "d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	I1101 12:01:22.653585  733793 cri.go:89] found id: "1b419ba60a9359ae536898d052ea0d4354e52910cfd2d21aa073abc9c568c354"
	I1101 12:01:22.653604  733793 cri.go:89] found id: ""
	I1101 12:01:22.653702  733793 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:01:22.686375  733793 out.go:203] 
	W1101 12:01:22.690525  733793 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:01:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 12:01:22.690551  733793 out.go:285] * 
	W1101 12:01:22.698976  733793 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 12:01:22.702918  733793 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-198717 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-198717
helpers_test.go:243: (dbg) docker inspect no-preload-198717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d",
	        "Created": "2025-11-01T11:58:39.349581274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:00:18.007283243Z",
	            "FinishedAt": "2025-11-01T12:00:17.165374882Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/hosts",
	        "LogPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d-json.log",
	        "Name": "/no-preload-198717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-198717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-198717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d",
	                "LowerDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-198717",
	                "Source": "/var/lib/docker/volumes/no-preload-198717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-198717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-198717",
	                "name.minikube.sigs.k8s.io": "no-preload-198717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae88c64e9f38197d5472d95bc5b24b273eb4a23d7a09809ca0332f203992011",
	            "SandboxKey": "/var/run/docker/netns/cae88c64e9f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-198717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:16:fb:26:08:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "984f332f5e0d4cc9526af8fdf6f1a1ce27a9c2697f377b762d5103dc82663350",
	                    "EndpointID": "073e3c6e0c62359d9e8e69446ecd21395f2e83e52b29f09e7851fd3ccd40ced0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-198717",
	                        "c52fbb51f4c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717: exit status 2 (452.024555ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-198717 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-198717 logs -n 25: (1.726923434s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	│ stop    │ -p old-k8s-version-952358 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:00:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:00:57.612375  731627 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:00:57.612603  731627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:00:57.612617  731627 out.go:374] Setting ErrFile to fd 2...
	I1101 12:00:57.612622  731627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:00:57.612915  731627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:00:57.613466  731627 out.go:368] Setting JSON to false
	I1101 12:00:57.614539  731627 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13407,"bootTime":1761985051,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:00:57.614615  731627 start.go:143] virtualization:  
	I1101 12:00:57.617673  731627 out.go:179] * [embed-certs-816860] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:00:57.621609  731627 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:00:57.621728  731627 notify.go:221] Checking for updates...
	I1101 12:00:57.627658  731627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:00:57.630724  731627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:00:57.634164  731627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:00:57.637104  731627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:00:57.639949  731627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1101 12:00:53.906625  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	W1101 12:00:56.405635  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	I1101 12:00:57.643275  731627 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:00:57.643836  731627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:00:57.671194  731627 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:00:57.671315  731627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:00:57.734667  731627 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:00:57.725433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:00:57.734778  731627 docker.go:319] overlay module found
	I1101 12:00:57.737832  731627 out.go:179] * Using the docker driver based on existing profile
	I1101 12:00:57.740670  731627 start.go:309] selected driver: docker
	I1101 12:00:57.740689  731627 start.go:930] validating driver "docker" against &{Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:57.740784  731627 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:00:57.741534  731627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:00:57.796786  731627 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:00:57.787224576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:00:57.797162  731627 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:00:57.797197  731627 cni.go:84] Creating CNI manager for ""
	I1101 12:00:57.797255  731627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:00:57.797295  731627 start.go:353] cluster config:
	{Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:57.800454  731627 out.go:179] * Starting "embed-certs-816860" primary control-plane node in "embed-certs-816860" cluster
	I1101 12:00:57.803166  731627 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:00:57.806114  731627 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:00:57.808841  731627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:00:57.808901  731627 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:00:57.808914  731627 cache.go:59] Caching tarball of preloaded images
	I1101 12:00:57.808954  731627 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:00:57.809003  731627 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:00:57.809013  731627 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:00:57.809134  731627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json ...
	I1101 12:00:57.828852  731627 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:00:57.828877  731627 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:00:57.828889  731627 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:00:57.829124  731627 start.go:360] acquireMachinesLock for embed-certs-816860: {Name:mkc466573abafda4e2b4a3754427ac01b3fcf9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:57.829212  731627 start.go:364] duration metric: took 59.997µs to acquireMachinesLock for "embed-certs-816860"
	I1101 12:00:57.829236  731627 start.go:96] Skipping create...Using existing machine configuration
	I1101 12:00:57.829248  731627 fix.go:54] fixHost starting: 
	I1101 12:00:57.829521  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:00:57.851217  731627 fix.go:112] recreateIfNeeded on embed-certs-816860: state=Stopped err=<nil>
	W1101 12:00:57.851251  731627 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 12:00:57.854547  731627 out.go:252] * Restarting existing docker container for "embed-certs-816860" ...
	I1101 12:00:57.854657  731627 cli_runner.go:164] Run: docker start embed-certs-816860
	I1101 12:00:58.137233  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:00:58.160804  731627 kic.go:430] container "embed-certs-816860" state is running.
	I1101 12:00:58.161201  731627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 12:00:58.187919  731627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json ...
	I1101 12:00:58.188150  731627 machine.go:94] provisionDockerMachine start ...
	I1101 12:00:58.188260  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:00:58.213530  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:00:58.214364  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:00:58.214395  731627 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:00:58.215086  731627 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34348->127.0.0.1:33800: read: connection reset by peer
	I1101 12:01:01.389913  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-816860
	
	I1101 12:01:01.389938  731627 ubuntu.go:182] provisioning hostname "embed-certs-816860"
	I1101 12:01:01.390007  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:01.413359  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:01.413677  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:01:01.413727  731627 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-816860 && echo "embed-certs-816860" | sudo tee /etc/hostname
	I1101 12:01:01.584891  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-816860
	
	I1101 12:01:01.585016  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:01.604941  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:01.605263  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:01:01.605286  731627 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-816860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-816860/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-816860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:01:01.767010  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:01:01.767040  731627 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:01:01.767066  731627 ubuntu.go:190] setting up certificates
	I1101 12:01:01.767081  731627 provision.go:84] configureAuth start
	I1101 12:01:01.767147  731627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 12:01:01.788108  731627 provision.go:143] copyHostCerts
	I1101 12:01:01.788220  731627 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:01:01.788243  731627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:01:01.788331  731627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:01:01.788444  731627 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:01:01.788457  731627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:01:01.788491  731627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:01:01.788568  731627 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:01:01.788577  731627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:01:01.788605  731627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:01:01.788667  731627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.embed-certs-816860 san=[127.0.0.1 192.168.76.2 embed-certs-816860 localhost minikube]
	I1101 12:01:02.026667  731627 provision.go:177] copyRemoteCerts
	I1101 12:01:02.026737  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:01:02.026788  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.048684  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:02.159394  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:01:02.178018  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 12:01:02.199813  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 12:01:02.218417  731627 provision.go:87] duration metric: took 451.312839ms to configureAuth
	I1101 12:01:02.218489  731627 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:01:02.218719  731627 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:02.218846  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.238660  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:02.238973  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:01:02.238996  731627 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:01:02.565843  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:01:02.565879  731627 machine.go:97] duration metric: took 4.377704s to provisionDockerMachine
	I1101 12:01:02.565891  731627 start.go:293] postStartSetup for "embed-certs-816860" (driver="docker")
	I1101 12:01:02.565902  731627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:01:02.565962  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:01:02.566016  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.591203  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	W1101 12:00:58.407230  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	W1101 12:01:00.407393  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	W1101 12:01:02.408279  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	I1101 12:01:02.703391  731627 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:01:02.707347  731627 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:01:02.707380  731627 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:01:02.707393  731627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:01:02.707449  731627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:01:02.707530  731627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:01:02.707642  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:01:02.715527  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:02.734159  731627 start.go:296] duration metric: took 168.252806ms for postStartSetup
	I1101 12:01:02.734245  731627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:01:02.734288  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.752487  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:02.854843  731627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:01:02.859786  731627 fix.go:56] duration metric: took 5.030530728s for fixHost
	I1101 12:01:02.859859  731627 start.go:83] releasing machines lock for "embed-certs-816860", held for 5.030633629s
	I1101 12:01:02.859966  731627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 12:01:02.876651  731627 ssh_runner.go:195] Run: cat /version.json
	I1101 12:01:02.876705  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.876976  731627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:01:02.877043  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.900182  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:02.917776  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:03.109859  731627 ssh_runner.go:195] Run: systemctl --version
	I1101 12:01:03.116332  731627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:01:03.159386  731627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:01:03.163964  731627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:01:03.164089  731627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:01:03.171979  731627 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:01:03.172051  731627 start.go:496] detecting cgroup driver to use...
	I1101 12:01:03.172091  731627 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:01:03.172139  731627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:01:03.189836  731627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:01:03.203370  731627 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:01:03.203434  731627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:01:03.219336  731627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:01:03.232879  731627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:01:03.357256  731627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:01:03.503670  731627 docker.go:234] disabling docker service ...
	I1101 12:01:03.503802  731627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:01:03.521638  731627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:01:03.539484  731627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:01:03.671883  731627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:01:03.806812  731627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:01:03.819565  731627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:01:03.836043  731627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:01:03.836153  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.845576  731627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:01:03.845731  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.855646  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.864558  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.873826  731627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:01:03.881966  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.891375  731627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.900269  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.911357  731627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:01:03.919458  731627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:01:03.927014  731627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:04.062099  731627 ssh_runner.go:195] Run: sudo systemctl restart crio
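The block above configures CRI-O by rewriting whole lines of /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarting the service. A minimal Go sketch of the same whole-line substitution pattern, not minikube's actual helper, with the file path and values taken from the log:

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf applies the same kind of whole-line replacements the log
// performs with `sed -i` on /etc/crio/crio.conf.d/02-crio.conf.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	// pause_image = "registry.k8s.io/pause:3.10.1"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}

After the rewrite the log reloads systemd units and restarts crio so the new pause image and cgroup driver take effect.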
	I1101 12:01:04.204231  731627 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:01:04.204300  731627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:01:04.208824  731627 start.go:564] Will wait 60s for crictl version
	I1101 12:01:04.208890  731627 ssh_runner.go:195] Run: which crictl
	I1101 12:01:04.216365  731627 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:01:04.261066  731627 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:01:04.261242  731627 ssh_runner.go:195] Run: crio --version
	I1101 12:01:04.298042  731627 ssh_runner.go:195] Run: crio --version
	I1101 12:01:04.331591  731627 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:01:04.334641  731627 cli_runner.go:164] Run: docker network inspect embed-certs-816860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:01:04.349789  731627 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:01:04.354050  731627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
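The /etc/hosts update above uses a filter-and-append pattern: drop any existing host.minikube.internal line, then re-add it with the network gateway IP. A small Go sketch of that pattern (illustrative only; the IP and hostname are the ones shown in the log):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell pipeline from the log:
// { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "<ip>\thost.minikube.internal"; } > tmp; cp tmp /etc/hosts
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry before re-adding it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}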
	I1101 12:01:04.364105  731627 kubeadm.go:884] updating cluster {Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:01:04.364229  731627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:04.364292  731627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:04.407954  731627 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:04.407978  731627 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:01:04.408142  731627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:04.436567  731627 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:04.436593  731627 cache_images.go:86] Images are preloaded, skipping loading
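The preload check above runs `sudo crictl images --output json` and compares the result against the required image list before deciding to skip extraction. A sketch of decoding that JSON in Go; the field names here are assumptions based on typical crictl output, not taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages is a minimal view of `crictl images --output json`.
// Field names are assumed, not confirmed by this report.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		fmt.Println(img.RepoTags)
	}
}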
	I1101 12:01:04.436604  731627 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:01:04.436708  731627 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-816860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:01:04.436794  731627 ssh_runner.go:195] Run: crio config
	I1101 12:01:04.502362  731627 cni.go:84] Creating CNI manager for ""
	I1101 12:01:04.502439  731627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:01:04.502475  731627 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:01:04.502529  731627 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-816860 NodeName:embed-certs-816860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:01:04.502716  731627 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-816860"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:01:04.502834  731627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:01:04.511697  731627 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:01:04.511770  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:01:04.519628  731627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 12:01:04.533540  731627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:01:04.546320  731627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
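The 2215-byte payload written to /var/tmp/minikube/kubeadm.yaml.new above is the multi-document kubeadm config shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks those documents with the third-party gopkg.in/yaml.v3 package (an assumption for illustration; this is not how minikube itself consumes the file):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Prints the apiVersion/kind of each document in the generated kubeadm config.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}

Later in the log the restart path diffs /var/tmp/minikube/kubeadm.yaml against this .new file to decide whether the running cluster needs reconfiguration.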
	I1101 12:01:04.559739  731627 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:01:04.563482  731627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:04.573039  731627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:04.690691  731627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:01:04.712559  731627 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860 for IP: 192.168.76.2
	I1101 12:01:04.712589  731627 certs.go:195] generating shared ca certs ...
	I1101 12:01:04.712611  731627 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:04.712784  731627 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:01:04.712852  731627 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:01:04.712865  731627 certs.go:257] generating profile certs ...
	I1101 12:01:04.712969  731627 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.key
	I1101 12:01:04.713044  731627 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key.a2d2a5ad
	I1101 12:01:04.713090  731627 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key
	I1101 12:01:04.713239  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:01:04.713292  731627 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:01:04.713306  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:01:04.713341  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:01:04.713374  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:01:04.713400  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:01:04.713462  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:04.714160  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:01:04.738166  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:01:04.759395  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:01:04.782976  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:01:04.804594  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 12:01:04.831193  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:01:04.857572  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:01:04.881643  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:01:04.916879  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:01:04.946379  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:01:04.967560  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:01:04.989530  731627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:01:05.008874  731627 ssh_runner.go:195] Run: openssl version
	I1101 12:01:05.016101  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:01:05.025620  731627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:01:05.029807  731627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:01:05.029885  731627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:01:05.079065  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:01:05.087864  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:01:05.097196  731627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:05.101231  731627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:05.101300  731627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:05.142889  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:01:05.151442  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:01:05.160670  731627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:01:05.164940  731627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:01:05.165017  731627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:01:05.208846  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
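Each CA above is installed by hashing its subject with `openssl x509 -hash -noout` and symlinking <hash>.0 in /etc/ssl/certs to the PEM file, which is how OpenSSL-style trust stores locate certificates. A Go sketch of that step, shelling out to openssl the same way the log does (illustrative, not minikube's code):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links /etc/ssl/certs/<subject-hash>.0 to the given PEM, mirroring
// `openssl x509 -hash -noout -in <pem>` followed by `ln -fs` in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}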
	I1101 12:01:05.218763  731627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:01:05.222649  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:01:05.264145  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:01:05.305838  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:01:05.346654  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:01:05.408976  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:01:05.465420  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
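The `-checkend 86400` calls above make openssl exit non-zero if a control-plane certificate expires within the next 24 hours, which is what triggers cert regeneration on restart. The same check in Go with crypto/x509, a sketch under the assumption that each file holds a single PEM certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to `openssl x509 -noout -in <path> -checkend <seconds>` failing.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}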
	I1101 12:01:05.532471  731627 kubeadm.go:401] StartCluster: {Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:01:05.532617  731627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:01:05.532717  731627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:01:05.599135  731627 cri.go:89] found id: "4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f"
	I1101 12:01:05.599195  731627 cri.go:89] found id: "39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee"
	I1101 12:01:05.599225  731627 cri.go:89] found id: "a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c"
	I1101 12:01:05.599253  731627 cri.go:89] found id: "4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6"
	I1101 12:01:05.599271  731627 cri.go:89] found id: ""
	I1101 12:01:05.599350  731627 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:01:05.629968  731627 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:01:05Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:01:05.630105  731627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:01:05.647043  731627 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:01:05.647113  731627 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:01:05.647194  731627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:01:05.659709  731627 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:01:05.660386  731627 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-816860" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:01:05.660711  731627 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-816860" cluster setting kubeconfig missing "embed-certs-816860" context setting]
	I1101 12:01:05.661323  731627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:05.663073  731627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:01:05.679995  731627 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 12:01:05.680082  731627 kubeadm.go:602] duration metric: took 32.936097ms to restartPrimaryControlPlane
	I1101 12:01:05.680107  731627 kubeadm.go:403] duration metric: took 147.645467ms to StartCluster
	I1101 12:01:05.680153  731627 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:05.680251  731627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:01:05.682591  731627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:05.682933  731627 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:01:05.683177  731627 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:05.683223  731627 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:01:05.683310  731627 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-816860"
	I1101 12:01:05.683330  731627 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-816860"
	W1101 12:01:05.683354  731627 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:01:05.683387  731627 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:01:05.683840  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.684010  731627 addons.go:70] Setting dashboard=true in profile "embed-certs-816860"
	I1101 12:01:05.684030  731627 addons.go:239] Setting addon dashboard=true in "embed-certs-816860"
	W1101 12:01:05.684037  731627 addons.go:248] addon dashboard should already be in state true
	I1101 12:01:05.684073  731627 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:01:05.684472  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.685862  731627 addons.go:70] Setting default-storageclass=true in profile "embed-certs-816860"
	I1101 12:01:05.685896  731627 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-816860"
	I1101 12:01:05.686204  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.695845  731627 out.go:179] * Verifying Kubernetes components...
	I1101 12:01:05.699051  731627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:05.740559  731627 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:01:05.740630  731627 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:01:05.743325  731627 addons.go:239] Setting addon default-storageclass=true in "embed-certs-816860"
	W1101 12:01:05.743348  731627 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:01:05.743372  731627 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:01:05.743778  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.743920  731627 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:01:05.743934  731627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:01:05.743970  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:05.747436  731627 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:01:05.750281  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:01:05.750303  731627 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:01:05.750371  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:05.783620  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:05.810012  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:05.810921  731627 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:01:05.810936  731627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:01:05.810991  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:05.839501  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:06.049419  731627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:01:06.061799  731627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:01:06.079546  731627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-816860" to be "Ready" ...
	I1101 12:01:06.144364  731627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:01:06.148422  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:01:06.148443  731627 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:01:06.231018  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:01:06.231039  731627 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:01:06.276107  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:01:06.276134  731627 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:01:06.320442  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:01:06.320512  731627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:01:06.377493  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:01:06.377556  731627 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:01:06.412516  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:01:06.412641  731627 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:01:06.472774  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:01:06.472840  731627 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:01:06.496604  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:01:06.496674  731627 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:01:06.516592  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:01:06.516665  731627 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:01:06.542945  731627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 12:01:04.914307  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	I1101 12:01:06.905479  728709 pod_ready.go:94] pod "coredns-66bc5c9577-s7p9w" is "Ready"
	I1101 12:01:06.905501  728709 pod_ready.go:86] duration metric: took 33.004992858s for pod "coredns-66bc5c9577-s7p9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.912649  728709 pod_ready.go:83] waiting for pod "etcd-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.917374  728709 pod_ready.go:94] pod "etcd-no-preload-198717" is "Ready"
	I1101 12:01:06.917395  728709 pod_ready.go:86] duration metric: took 4.672963ms for pod "etcd-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.924016  728709 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.928133  728709 pod_ready.go:94] pod "kube-apiserver-no-preload-198717" is "Ready"
	I1101 12:01:06.928155  728709 pod_ready.go:86] duration metric: took 4.115647ms for pod "kube-apiserver-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.931274  728709 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.103523  728709 pod_ready.go:94] pod "kube-controller-manager-no-preload-198717" is "Ready"
	I1101 12:01:07.103598  728709 pod_ready.go:86] duration metric: took 172.304485ms for pod "kube-controller-manager-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.303761  728709 pod_ready.go:83] waiting for pod "kube-proxy-tlh2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.705641  728709 pod_ready.go:94] pod "kube-proxy-tlh2v" is "Ready"
	I1101 12:01:07.705671  728709 pod_ready.go:86] duration metric: took 401.835688ms for pod "kube-proxy-tlh2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.904346  728709 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:08.318740  728709 pod_ready.go:94] pod "kube-scheduler-no-preload-198717" is "Ready"
	I1101 12:01:08.318769  728709 pod_ready.go:86] duration metric: took 414.394636ms for pod "kube-scheduler-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:08.318781  728709 pod_ready.go:40] duration metric: took 34.423315708s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:01:08.421847  728709 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:01:08.424753  728709 out.go:179] * Done! kubectl is now configured to use "no-preload-198717" cluster and "default" namespace by default
	I1101 12:01:11.767283  731627 node_ready.go:49] node "embed-certs-816860" is "Ready"
	I1101 12:01:11.767312  731627 node_ready.go:38] duration metric: took 5.687689685s for node "embed-certs-816860" to be "Ready" ...
	I1101 12:01:11.767327  731627 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:01:11.767389  731627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:01:13.502388  731627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.440510551s)
	I1101 12:01:13.502465  731627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.358082852s)
	I1101 12:01:13.502824  731627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.959796265s)
	I1101 12:01:13.503101  731627 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.73570021s)
	I1101 12:01:13.503137  731627 api_server.go:72] duration metric: took 7.820139847s to wait for apiserver process to appear ...
	I1101 12:01:13.503145  731627 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:01:13.503159  731627 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:01:13.506774  731627 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-816860 addons enable metrics-server
	
	I1101 12:01:13.518608  731627 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 12:01:13.520279  731627 api_server.go:141] control plane version: v1.34.1
	I1101 12:01:13.520305  731627 api_server.go:131] duration metric: took 17.15374ms to wait for apiserver health ...
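The healthz wait above polls https://192.168.76.2:8443/healthz until the apiserver answers 200 with "ok". A minimal Go sketch of such a probe; it skips TLS verification for brevity, which the real check may not do:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Polls the apiserver healthz endpoint from the log until it answers "ok".
func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: skip cert verification; a real probe would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println(string(body)) // "ok"
				return
			}
		}
		time.Sleep(time.Second)
	}
}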
	I1101 12:01:13.520315  731627 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:01:13.524985  731627 system_pods.go:59] 8 kube-system pods found
	I1101 12:01:13.525033  731627 system_pods.go:61] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:01:13.525047  731627 system_pods.go:61] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:01:13.525054  731627 system_pods.go:61] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:01:13.525067  731627 system_pods.go:61] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:01:13.525074  731627 system_pods.go:61] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:01:13.525079  731627 system_pods.go:61] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:01:13.525092  731627 system_pods.go:61] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:01:13.525098  731627 system_pods.go:61] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Running
	I1101 12:01:13.525104  731627 system_pods.go:74] duration metric: took 4.78366ms to wait for pod list to return data ...
	I1101 12:01:13.525128  731627 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:01:13.532908  731627 default_sa.go:45] found service account: "default"
	I1101 12:01:13.532936  731627 default_sa.go:55] duration metric: took 7.801865ms for default service account to be created ...
	I1101 12:01:13.532949  731627 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:01:13.543066  731627 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 12:01:13.545776  731627 addons.go:515] duration metric: took 7.86254325s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 12:01:13.625237  731627 system_pods.go:86] 8 kube-system pods found
	I1101 12:01:13.625279  731627 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:01:13.625288  731627 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:01:13.625295  731627 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:01:13.625302  731627 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:01:13.625308  731627 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:01:13.625318  731627 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:01:13.625325  731627 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:01:13.625329  731627 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Running
	I1101 12:01:13.625336  731627 system_pods.go:126] duration metric: took 92.38187ms to wait for k8s-apps to be running ...
	I1101 12:01:13.625344  731627 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:01:13.625400  731627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:01:13.662917  731627 system_svc.go:56] duration metric: took 37.562725ms WaitForService to wait for kubelet
	I1101 12:01:13.662955  731627 kubeadm.go:587] duration metric: took 7.979968841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:01:13.662975  731627 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:01:13.666848  731627 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:01:13.666876  731627 node_conditions.go:123] node cpu capacity is 2
	I1101 12:01:13.666889  731627 node_conditions.go:105] duration metric: took 3.908301ms to run NodePressure ...
	I1101 12:01:13.666903  731627 start.go:242] waiting for startup goroutines ...
	I1101 12:01:13.666911  731627 start.go:247] waiting for cluster config update ...
	I1101 12:01:13.666922  731627 start.go:256] writing updated cluster config ...
	I1101 12:01:13.667208  731627 ssh_runner.go:195] Run: rm -f paused
	I1101 12:01:13.671889  731627 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:01:13.676422  731627 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4d2b7" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 12:01:15.690231  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:18.193020  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:20.195295  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.196117826Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1cf66726-4df3-437b-879f-29893fc5a6d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.197271474Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d34b51ff-8d90-4376-ae06-9a3ae9c0b210 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.197433429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.210857159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.211063028Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f99acf74779d9fe04db4ade6a05946b772d16234a57a15a0e47ec7c504fe1084/merged/etc/passwd: no such file or directory"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.211095874Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f99acf74779d9fe04db4ade6a05946b772d16234a57a15a0e47ec7c504fe1084/merged/etc/group: no such file or directory"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.211466718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.25061663Z" level=info msg="Created container c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3: kube-system/storage-provisioner/storage-provisioner" id=d34b51ff-8d90-4376-ae06-9a3ae9c0b210 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.251721186Z" level=info msg="Starting container: c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3" id=e81af74e-1278-40f7-a4dd-710b3d2ee1ab name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.2565809Z" level=info msg="Started container" PID=1631 containerID=c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3 description=kube-system/storage-provisioner/storage-provisioner id=e81af74e-1278-40f7-a4dd-710b3d2ee1ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=e37e515a354fa7eea877e9d5689a53e75d7e6932df35cfef75d0296e90609f1b
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.673362602Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.680002417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.680165242Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.680250199Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.685007585Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.685048546Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.685071643Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.691443927Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.6914814Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.6915017Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.709204599Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.709238593Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.709261798Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.716962517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.717004856Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c7087c8eaba44       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   e37e515a354fa       storage-provisioner                          kube-system
	d024886bd481f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   7b3c27b4017d7       dashboard-metrics-scraper-6ffb444bf9-txkm8   kubernetes-dashboard
	1b419ba60a935       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago      Running             kubernetes-dashboard        0                   736653037f5de       kubernetes-dashboard-855c9754f9-n6g7x        kubernetes-dashboard
	71767ab10cbc0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   d6922c4ad3f75       busybox                                      default
	c2cce03668294       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   512317ab015b8       coredns-66bc5c9577-s7p9w                     kube-system
	6fb0dc993fbe2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   1ea679ebda0a8       kindnet-qnmmf                                kube-system
	b38d516da8a6e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   e37e515a354fa       storage-provisioner                          kube-system
	a22b83973f57d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   599d815732723       kube-proxy-tlh2v                             kube-system
	4146bebcfc78f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   f19b1b5e5067a       kube-apiserver-no-preload-198717             kube-system
	f3772f41e725d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   e7a06bf3f553f       kube-scheduler-no-preload-198717             kube-system
	9d24638b6e39f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   e56c6b4defe9b       kube-controller-manager-no-preload-198717    kube-system
	41bc0ffb4ace7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   b4d7cf56dffd3       etcd-no-preload-198717                       kube-system
	
	
	==> coredns [c2cce036682945a016dcf6de8c5b63c2797f50b4a3bde0c30d10229ce295a9df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57846 - 55679 "HINFO IN 1180668652487777901.6623066087083184299. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023039879s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-198717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-198717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=no-preload-198717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-198717
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:01:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-198717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f8c2bafd-3783-4a3a-8c96-56d9871a2cad
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-s7p9w                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-no-preload-198717                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-qnmmf                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-198717              250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-198717     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-tlh2v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-198717              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-txkm8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n6g7x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 107s               kube-proxy       
	  Normal   Starting                 50s                kube-proxy       
	  Normal   NodeHasSufficientPID     114s               kubelet          Node no-preload-198717 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 114s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  114s               kubelet          Node no-preload-198717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    114s               kubelet          Node no-preload-198717 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 114s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           110s               node-controller  Node no-preload-198717 event: Registered Node no-preload-198717 in Controller
	  Normal   NodeReady                93s                kubelet          Node no-preload-198717 status is now: NodeReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node no-preload-198717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node no-preload-198717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node no-preload-198717 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                node-controller  Node no-preload-198717 event: Registered Node no-preload-198717 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [41bc0ffb4ace7b78b5269921a034d897960eed08f17125d3ab8c8df9c3a224fd] <==
	{"level":"warn","ts":"2025-11-01T12:00:28.552320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.597870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.653026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.701662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.726373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.773147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.784167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.833064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.870845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.906304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.954083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.981940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.013407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.054292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.113464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.133360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.211128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.232197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.251685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.286007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.310540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.354318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.362032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.387832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.458348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49950","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:01:24 up  3:43,  0 user,  load average: 4.64, 3.78, 2.94
	Linux no-preload-198717 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fb0dc993fbe24c714882e89d86ada6b3ba240cf813b4528e24730daf7e3b3d8] <==
	I1101 12:00:32.473360       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:00:32.476748       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 12:00:32.476883       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:00:32.476896       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:00:32.476906       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:00:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:00:32.672485       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:00:32.672514       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:00:32.672522       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:00:32.673342       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:01:02.673084       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 12:01:02.673090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:01:02.673227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:01:02.673329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 12:01:03.772991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:01:03.773096       1 metrics.go:72] Registering metrics
	I1101 12:01:03.773202       1 controller.go:711] "Syncing nftables rules"
	I1101 12:01:12.673062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:01:12.673135       1 main.go:301] handling current node
	I1101 12:01:22.677227       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:01:22.677260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4146bebcfc78fff7e205d15a351a3b9489d9f1d7f2ce428d242490a4a9a214da] <==
	I1101 12:00:31.370078       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 12:00:31.370119       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 12:00:31.370132       1 policy_source.go:240] refreshing policies
	I1101 12:00:31.370165       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:00:31.372292       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:00:31.372581       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 12:00:31.372593       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 12:00:31.373142       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 12:00:31.374145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:00:31.388043       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:00:31.394652       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:00:31.394782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:00:31.405577       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 12:00:31.431598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1101 12:00:31.574352       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:00:31.589651       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:00:32.657129       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:00:32.885005       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:00:32.977023       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:00:33.002779       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:00:33.274376       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.123.31"}
	I1101 12:00:33.321066       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.58.24"}
	I1101 12:00:34.903816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:00:35.150678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:00:35.348782       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9d24638b6e39f00dc4f5ad46eade0ee4467aa0d861d222443a6b43a6ccaaf579] <==
	I1101 12:00:34.913895       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:00:34.913981       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:00:34.914010       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:00:34.914037       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:00:34.913941       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 12:00:34.919575       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 12:00:34.919960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:00:34.921895       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:00:34.925351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 12:00:34.925559       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 12:00:34.925823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 12:00:34.925933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 12:00:34.930073       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 12:00:34.939718       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 12:00:34.942116       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 12:00:34.942342       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 12:00:34.942408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 12:00:34.943271       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:00:34.944505       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 12:00:34.946853       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:00:34.949334       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 12:00:34.950578       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:00:34.952015       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 12:00:34.956223       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 12:00:34.963618       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [a22b83973f57d185b05c922046586076ab67a6c7b4b442258a7b45e95082a942] <==
	I1101 12:00:33.442028       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:00:33.767995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:00:33.872783       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:00:33.878040       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 12:00:33.881086       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:00:33.953029       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:00:33.953161       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:00:33.960705       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:00:33.962077       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:00:33.962152       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:00:33.969259       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:00:33.969338       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:00:33.969997       1 config.go:200] "Starting service config controller"
	I1101 12:00:33.970041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:00:33.970324       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:00:33.970330       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:00:33.970699       1 config.go:309] "Starting node config controller"
	I1101 12:00:33.970706       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:00:33.970711       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:00:34.070420       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:00:34.070524       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 12:00:34.070629       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f3772f41e725d1af7e862ae449d7118696e53f3be37b8779faa9d26f954875a8] <==
	I1101 12:00:28.105033       1 serving.go:386] Generated self-signed cert in-memory
	I1101 12:00:34.064212       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:00:34.064328       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:00:34.069545       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 12:00:34.069976       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 12:00:34.069939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:00:34.070116       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:00:34.070198       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:00:34.070121       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:00:34.069956       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:00:34.075847       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:00:34.170420       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:00:34.170530       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 12:00:34.176936       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:00:35 no-preload-198717 kubelet[763]: I1101 12:00:35.504111     763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/65a2878c-af42-4b31-aec1-ec9f78bd70aa-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-txkm8\" (UID: \"65a2878c-af42-4b31-aec1-ec9f78bd70aa\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8"
	Nov 01 12:00:35 no-preload-198717 kubelet[763]: W1101 12:00:35.792010     763 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/crio-7b3c27b4017d7137638b17c2e81be208663608e8c67d1dcba3003f1cb039c5c3 WatchSource:0}: Error finding container 7b3c27b4017d7137638b17c2e81be208663608e8c67d1dcba3003f1cb039c5c3: Status 404 returned error can't find the container with id 7b3c27b4017d7137638b17c2e81be208663608e8c67d1dcba3003f1cb039c5c3
	Nov 01 12:00:35 no-preload-198717 kubelet[763]: W1101 12:00:35.812763     763 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/crio-736653037f5dec2728b40fe7239a1004d9aad438fd191ec87ab6955719e55fed WatchSource:0}: Error finding container 736653037f5dec2728b40fe7239a1004d9aad438fd191ec87ab6955719e55fed: Status 404 returned error can't find the container with id 736653037f5dec2728b40fe7239a1004d9aad438fd191ec87ab6955719e55fed
	Nov 01 12:00:36 no-preload-198717 kubelet[763]: I1101 12:00:36.412113     763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 12:00:40 no-preload-198717 kubelet[763]: I1101 12:00:40.118811     763 scope.go:117] "RemoveContainer" containerID="938a986ef6264ada6f931e206652d92abe8a537626c0529b9a6d8dedae4f7cf1"
	Nov 01 12:00:41 no-preload-198717 kubelet[763]: I1101 12:00:41.123500     763 scope.go:117] "RemoveContainer" containerID="938a986ef6264ada6f931e206652d92abe8a537626c0529b9a6d8dedae4f7cf1"
	Nov 01 12:00:41 no-preload-198717 kubelet[763]: I1101 12:00:41.123844     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:00:41 no-preload-198717 kubelet[763]: E1101 12:00:41.123997     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:00:42 no-preload-198717 kubelet[763]: I1101 12:00:42.129334     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:00:42 no-preload-198717 kubelet[763]: E1101 12:00:42.129505     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:00:45 no-preload-198717 kubelet[763]: I1101 12:00:45.768750     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:00:45 no-preload-198717 kubelet[763]: E1101 12:00:45.770244     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:00:47 no-preload-198717 kubelet[763]: I1101 12:00:47.167258     763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n6g7x" podStartSLOduration=0.904170894 podStartE2EDuration="12.167240695s" podCreationTimestamp="2025-11-01 12:00:35 +0000 UTC" firstStartedPulling="2025-11-01 12:00:35.82455111 +0000 UTC m=+11.149117899" lastFinishedPulling="2025-11-01 12:00:47.08762091 +0000 UTC m=+22.412187700" observedRunningTime="2025-11-01 12:00:47.157946569 +0000 UTC m=+22.482513383" watchObservedRunningTime="2025-11-01 12:00:47.167240695 +0000 UTC m=+22.491807485"
	Nov 01 12:01:00 no-preload-198717 kubelet[763]: I1101 12:01:00.898844     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:01:01 no-preload-198717 kubelet[763]: I1101 12:01:01.181596     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:01:01 no-preload-198717 kubelet[763]: I1101 12:01:01.182168     763 scope.go:117] "RemoveContainer" containerID="d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	Nov 01 12:01:01 no-preload-198717 kubelet[763]: E1101 12:01:01.182498     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:01:04 no-preload-198717 kubelet[763]: I1101 12:01:04.193882     763 scope.go:117] "RemoveContainer" containerID="b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa"
	Nov 01 12:01:05 no-preload-198717 kubelet[763]: I1101 12:01:05.761609     763 scope.go:117] "RemoveContainer" containerID="d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	Nov 01 12:01:05 no-preload-198717 kubelet[763]: E1101 12:01:05.761964     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:01:18 no-preload-198717 kubelet[763]: I1101 12:01:18.898168     763 scope.go:117] "RemoveContainer" containerID="d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	Nov 01 12:01:18 no-preload-198717 kubelet[763]: E1101 12:01:18.898339     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:01:21 no-preload-198717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:01:21 no-preload-198717 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:01:21 no-preload-198717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1b419ba60a9359ae536898d052ea0d4354e52910cfd2d21aa073abc9c568c354] <==
	2025/11/01 12:00:47 Using namespace: kubernetes-dashboard
	2025/11/01 12:00:47 Using in-cluster config to connect to apiserver
	2025/11/01 12:00:47 Using secret token for csrf signing
	2025/11/01 12:00:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 12:00:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 12:00:47 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 12:00:47 Generating JWE encryption key
	2025/11/01 12:00:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 12:00:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 12:00:47 Initializing JWE encryption key from synchronized object
	2025/11/01 12:00:47 Creating in-cluster Sidecar client
	2025/11/01 12:00:47 Serving insecurely on HTTP port: 9090
	2025/11/01 12:00:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:01:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:00:47 Starting overwatch
	
	
	==> storage-provisioner [b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa] <==
	I1101 12:00:33.404575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 12:01:03.407401       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3] <==
	I1101 12:01:04.278000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:01:04.290657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:01:04.290778       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:01:04.294246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:07.749897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:12.011776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:15.610183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:18.664357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:21.689607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:21.708384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:01:21.708790       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:01:21.709000       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-198717_44d86576-024c-4998-8f48-17d69290c56b!
	W1101 12:01:21.725247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:01:21.725165       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a58c8693-3c87-4a71-8fd5-eb11efb6d780", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-198717_44d86576-024c-4998-8f48-17d69290c56b became leader
	W1101 12:01:21.770438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:01:21.825366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-198717_44d86576-024c-4998-8f48-17d69290c56b!
	W1101 12:01:23.774253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:23.783241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-198717 -n no-preload-198717
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-198717 -n no-preload-198717: exit status 2 (512.905784ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-198717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
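A note on the pod check above: it filters for anything not in the Running phase across all namespaces. A minimal way to reproduce it by hand against the same profile (context name taken from this run; assumes the cluster is still up, and output will differ on a live cluster) is:

    kubectl --context no-preload-198717 get pods -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'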
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-198717
helpers_test.go:243: (dbg) docker inspect no-preload-198717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d",
	        "Created": "2025-11-01T11:58:39.349581274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:00:18.007283243Z",
	            "FinishedAt": "2025-11-01T12:00:17.165374882Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/hosts",
	        "LogPath": "/var/lib/docker/containers/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d-json.log",
	        "Name": "/no-preload-198717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-198717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-198717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d",
	                "LowerDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/152565da65bb8e2babcb3d05d9c6adec06baee07b5e89f10bc3bca80fd9a00b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-198717",
	                "Source": "/var/lib/docker/volumes/no-preload-198717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-198717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-198717",
	                "name.minikube.sigs.k8s.io": "no-preload-198717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae88c64e9f38197d5472d95bc5b24b273eb4a23d7a09809ca0332f203992011",
	            "SandboxKey": "/var/run/docker/netns/cae88c64e9f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-198717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:16:fb:26:08:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "984f332f5e0d4cc9526af8fdf6f1a1ce27a9c2697f377b762d5103dc82663350",
	                    "EndpointID": "073e3c6e0c62359d9e8e69446ecd21395f2e83e52b29f09e7851fd3ccd40ced0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-198717",
	                        "c52fbb51f4c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
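The full docker inspect dump above can be narrowed to the two fields the status checks below depend on, the container state and the host port mapped to the apiserver's 8443/tcp, using docker's Go-template format flag. A sketch, assuming the no-preload-198717 container still exists on the host:

    docker inspect -f '{{.State.Status}}' no-preload-198717
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-198717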
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717: exit status 2 (489.01142ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-198717 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-198717 logs -n 25: (1.660872226s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-505831 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-505831    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ delete  │ -p cert-options-505831                                                                                                                                                                                                                        │ cert-options-505831    │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:55 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:55 UTC │ 01 Nov 25 11:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-952358 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │                     │
	│ stop    │ -p old-k8s-version-952358 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:57 UTC │
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694 │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860     │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717      │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:00:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:00:57.612375  731627 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:00:57.612603  731627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:00:57.612617  731627 out.go:374] Setting ErrFile to fd 2...
	I1101 12:00:57.612622  731627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:00:57.612915  731627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:00:57.613466  731627 out.go:368] Setting JSON to false
	I1101 12:00:57.614539  731627 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13407,"bootTime":1761985051,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:00:57.614615  731627 start.go:143] virtualization:  
	I1101 12:00:57.617673  731627 out.go:179] * [embed-certs-816860] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:00:57.621609  731627 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:00:57.621728  731627 notify.go:221] Checking for updates...
	I1101 12:00:57.627658  731627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:00:57.630724  731627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:00:57.634164  731627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:00:57.637104  731627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:00:57.639949  731627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1101 12:00:53.906625  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	W1101 12:00:56.405635  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	I1101 12:00:57.643275  731627 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:00:57.643836  731627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:00:57.671194  731627 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:00:57.671315  731627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:00:57.734667  731627 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:00:57.725433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:00:57.734778  731627 docker.go:319] overlay module found
	I1101 12:00:57.737832  731627 out.go:179] * Using the docker driver based on existing profile
	I1101 12:00:57.740670  731627 start.go:309] selected driver: docker
	I1101 12:00:57.740689  731627 start.go:930] validating driver "docker" against &{Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:57.740784  731627 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:00:57.741534  731627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:00:57.796786  731627 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:00:57.787224576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:00:57.797162  731627 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:00:57.797197  731627 cni.go:84] Creating CNI manager for ""
	I1101 12:00:57.797255  731627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:00:57.797295  731627 start.go:353] cluster config:
	{Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:00:57.800454  731627 out.go:179] * Starting "embed-certs-816860" primary control-plane node in "embed-certs-816860" cluster
	I1101 12:00:57.803166  731627 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:00:57.806114  731627 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:00:57.808841  731627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:00:57.808901  731627 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:00:57.808914  731627 cache.go:59] Caching tarball of preloaded images
	I1101 12:00:57.808954  731627 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:00:57.809003  731627 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:00:57.809013  731627 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:00:57.809134  731627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json ...
	I1101 12:00:57.828852  731627 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:00:57.828877  731627 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:00:57.828889  731627 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:00:57.829124  731627 start.go:360] acquireMachinesLock for embed-certs-816860: {Name:mkc466573abafda4e2b4a3754427ac01b3fcf9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:00:57.829212  731627 start.go:364] duration metric: took 59.997µs to acquireMachinesLock for "embed-certs-816860"
	I1101 12:00:57.829236  731627 start.go:96] Skipping create...Using existing machine configuration
	I1101 12:00:57.829248  731627 fix.go:54] fixHost starting: 
	I1101 12:00:57.829521  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:00:57.851217  731627 fix.go:112] recreateIfNeeded on embed-certs-816860: state=Stopped err=<nil>
	W1101 12:00:57.851251  731627 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 12:00:57.854547  731627 out.go:252] * Restarting existing docker container for "embed-certs-816860" ...
	I1101 12:00:57.854657  731627 cli_runner.go:164] Run: docker start embed-certs-816860
	I1101 12:00:58.137233  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:00:58.160804  731627 kic.go:430] container "embed-certs-816860" state is running.
	I1101 12:00:58.161201  731627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 12:00:58.187919  731627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/config.json ...
	I1101 12:00:58.188150  731627 machine.go:94] provisionDockerMachine start ...
	I1101 12:00:58.188260  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:00:58.213530  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:00:58.214364  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:00:58.214395  731627 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:00:58.215086  731627 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34348->127.0.0.1:33800: read: connection reset by peer
	I1101 12:01:01.389913  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-816860
	
	I1101 12:01:01.389938  731627 ubuntu.go:182] provisioning hostname "embed-certs-816860"
	I1101 12:01:01.390007  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:01.413359  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:01.413677  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:01:01.413727  731627 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-816860 && echo "embed-certs-816860" | sudo tee /etc/hostname
	I1101 12:01:01.584891  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-816860
	
	I1101 12:01:01.585016  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:01.604941  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:01.605263  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:01:01.605286  731627 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-816860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-816860/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-816860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:01:01.767010  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:01:01.767040  731627 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:01:01.767066  731627 ubuntu.go:190] setting up certificates
	I1101 12:01:01.767081  731627 provision.go:84] configureAuth start
	I1101 12:01:01.767147  731627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 12:01:01.788108  731627 provision.go:143] copyHostCerts
	I1101 12:01:01.788220  731627 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:01:01.788243  731627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:01:01.788331  731627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:01:01.788444  731627 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:01:01.788457  731627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:01:01.788491  731627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:01:01.788568  731627 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:01:01.788577  731627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:01:01.788605  731627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:01:01.788667  731627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.embed-certs-816860 san=[127.0.0.1 192.168.76.2 embed-certs-816860 localhost minikube]
	I1101 12:01:02.026667  731627 provision.go:177] copyRemoteCerts
	I1101 12:01:02.026737  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:01:02.026788  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.048684  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:02.159394  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:01:02.178018  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 12:01:02.199813  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 12:01:02.218417  731627 provision.go:87] duration metric: took 451.312839ms to configureAuth
	I1101 12:01:02.218489  731627 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:01:02.218719  731627 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:02.218846  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.238660  731627 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:02.238973  731627 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33800 <nil> <nil>}
	I1101 12:01:02.238996  731627 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:01:02.565843  731627 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:01:02.565879  731627 machine.go:97] duration metric: took 4.377704s to provisionDockerMachine
	I1101 12:01:02.565891  731627 start.go:293] postStartSetup for "embed-certs-816860" (driver="docker")
	I1101 12:01:02.565902  731627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:01:02.565962  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:01:02.566016  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.591203  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	W1101 12:00:58.407230  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	W1101 12:01:00.407393  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	W1101 12:01:02.408279  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
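The interleaved warnings above come from a concurrently running start process (pid 728709) polling its coredns pod until it reports Ready. As an aside (not part of the captured run), the same condition can be checked by hand with kubectl pointed at that cluster, using the pod name from the warning:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system get pod coredns-66bc5c9577-s7p9w -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'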
	I1101 12:01:02.703391  731627 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:01:02.707347  731627 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:01:02.707380  731627 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:01:02.707393  731627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:01:02.707449  731627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:01:02.707530  731627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:01:02.707642  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:01:02.715527  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:02.734159  731627 start.go:296] duration metric: took 168.252806ms for postStartSetup
	I1101 12:01:02.734245  731627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:01:02.734288  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.752487  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:02.854843  731627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:01:02.859786  731627 fix.go:56] duration metric: took 5.030530728s for fixHost
	I1101 12:01:02.859859  731627 start.go:83] releasing machines lock for "embed-certs-816860", held for 5.030633629s
	I1101 12:01:02.859966  731627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-816860
	I1101 12:01:02.876651  731627 ssh_runner.go:195] Run: cat /version.json
	I1101 12:01:02.876705  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.876976  731627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:01:02.877043  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:02.900182  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:02.917776  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:03.109859  731627 ssh_runner.go:195] Run: systemctl --version
	I1101 12:01:03.116332  731627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:01:03.159386  731627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:01:03.163964  731627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:01:03.164089  731627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:01:03.171979  731627 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:01:03.172051  731627 start.go:496] detecting cgroup driver to use...
	I1101 12:01:03.172091  731627 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:01:03.172139  731627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:01:03.189836  731627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:01:03.203370  731627 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:01:03.203434  731627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:01:03.219336  731627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:01:03.232879  731627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:01:03.357256  731627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:01:03.503670  731627 docker.go:234] disabling docker service ...
	I1101 12:01:03.503802  731627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:01:03.521638  731627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:01:03.539484  731627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:01:03.671883  731627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:01:03.806812  731627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:01:03.819565  731627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:01:03.836043  731627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:01:03.836153  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.845576  731627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:01:03.845731  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.855646  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.864558  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.873826  731627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:01:03.881966  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.891375  731627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.900269  731627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:03.911357  731627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:01:03.919458  731627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:01:03.927014  731627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:04.062099  731627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 12:01:04.204231  731627 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:01:04.204300  731627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:01:04.208824  731627 start.go:564] Will wait 60s for crictl version
	I1101 12:01:04.208890  731627 ssh_runner.go:195] Run: which crictl
	I1101 12:01:04.216365  731627 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:01:04.261066  731627 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:01:04.261242  731627 ssh_runner.go:195] Run: crio --version
	I1101 12:01:04.298042  731627 ssh_runner.go:195] Run: crio --version
	I1101 12:01:04.331591  731627 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:01:04.334641  731627 cli_runner.go:164] Run: docker network inspect embed-certs-816860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:01:04.349789  731627 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:01:04.354050  731627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:04.364105  731627 kubeadm.go:884] updating cluster {Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:01:04.364229  731627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:04.364292  731627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:04.407954  731627 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:04.407978  731627 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:01:04.408142  731627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:04.436567  731627 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:04.436593  731627 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:01:04.436604  731627 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:01:04.436708  731627 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-816860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
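The kubelet unit drop-in shown above is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp steps further down). As an aside (not part of the captured run), the merged unit could be inspected on the node with:

	sudo systemctl cat kubelet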
	I1101 12:01:04.436794  731627 ssh_runner.go:195] Run: crio config
	I1101 12:01:04.502362  731627 cni.go:84] Creating CNI manager for ""
	I1101 12:01:04.502439  731627 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:01:04.502475  731627 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:01:04.502529  731627 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-816860 NodeName:embed-certs-816860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:01:04.502716  731627 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-816860"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
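The kubeadm/kubelet/kube-proxy configuration printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. As an aside (not part of the captured run), a file like that can be sanity-checked with kubeadm itself, assuming the versioned binary the log lists next:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new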
	
	I1101 12:01:04.502834  731627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:01:04.511697  731627 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:01:04.511770  731627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:01:04.519628  731627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 12:01:04.533540  731627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:01:04.546320  731627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 12:01:04.559739  731627 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:01:04.563482  731627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:04.573039  731627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:04.690691  731627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:01:04.712559  731627 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860 for IP: 192.168.76.2
	I1101 12:01:04.712589  731627 certs.go:195] generating shared ca certs ...
	I1101 12:01:04.712611  731627 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:04.712784  731627 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:01:04.712852  731627 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:01:04.712865  731627 certs.go:257] generating profile certs ...
	I1101 12:01:04.712969  731627 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/client.key
	I1101 12:01:04.713044  731627 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key.a2d2a5ad
	I1101 12:01:04.713090  731627 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key
	I1101 12:01:04.713239  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:01:04.713292  731627 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:01:04.713306  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:01:04.713341  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:01:04.713374  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:01:04.713400  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:01:04.713462  731627 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:04.714160  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:01:04.738166  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:01:04.759395  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:01:04.782976  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:01:04.804594  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 12:01:04.831193  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:01:04.857572  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:01:04.881643  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/embed-certs-816860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:01:04.916879  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:01:04.946379  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:01:04.967560  731627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:01:04.989530  731627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:01:05.008874  731627 ssh_runner.go:195] Run: openssl version
	I1101 12:01:05.016101  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:01:05.025620  731627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:01:05.029807  731627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:01:05.029885  731627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:01:05.079065  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:01:05.087864  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:01:05.097196  731627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:05.101231  731627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:05.101300  731627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:05.142889  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:01:05.151442  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:01:05.160670  731627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:01:05.164940  731627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:01:05.165017  731627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:01:05.208846  731627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
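The hash/symlink steps above are how the test CAs are installed into system trust: each PEM under /usr/share/ca-certificates is hashed with openssl and linked under /etc/ssl/certs as <hash>.0. A minimal sketch of the same step, using the minikubeCA.pem path from the log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 here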
	I1101 12:01:05.218763  731627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:01:05.222649  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:01:05.264145  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:01:05.305838  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:01:05.346654  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:01:05.408976  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:01:05.465420  731627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
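Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A sketch of the same check against the apiserver cert copied earlier:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h (would be regenerated)"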
	I1101 12:01:05.532471  731627 kubeadm.go:401] StartCluster: {Name:embed-certs-816860 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-816860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:01:05.532617  731627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:01:05.532717  731627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:01:05.599135  731627 cri.go:89] found id: "4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f"
	I1101 12:01:05.599195  731627 cri.go:89] found id: "39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee"
	I1101 12:01:05.599225  731627 cri.go:89] found id: "a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c"
	I1101 12:01:05.599253  731627 cri.go:89] found id: "4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6"
	I1101 12:01:05.599271  731627 cri.go:89] found id: ""
	I1101 12:01:05.599350  731627 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:01:05.629968  731627 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:01:05Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:01:05.630105  731627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:01:05.647043  731627 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:01:05.647113  731627 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:01:05.647194  731627 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:01:05.659709  731627 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:01:05.660386  731627 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-816860" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:01:05.660711  731627 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-816860" cluster setting kubeconfig missing "embed-certs-816860" context setting]
	I1101 12:01:05.661323  731627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:05.663073  731627 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:01:05.679995  731627 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 12:01:05.680082  731627 kubeadm.go:602] duration metric: took 32.936097ms to restartPrimaryControlPlane
	I1101 12:01:05.680107  731627 kubeadm.go:403] duration metric: took 147.645467ms to StartCluster
	I1101 12:01:05.680153  731627 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:05.680251  731627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:01:05.682591  731627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:05.682933  731627 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:01:05.683177  731627 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:05.683223  731627 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:01:05.683310  731627 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-816860"
	I1101 12:01:05.683330  731627 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-816860"
	W1101 12:01:05.683354  731627 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:01:05.683387  731627 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:01:05.683840  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.684010  731627 addons.go:70] Setting dashboard=true in profile "embed-certs-816860"
	I1101 12:01:05.684030  731627 addons.go:239] Setting addon dashboard=true in "embed-certs-816860"
	W1101 12:01:05.684037  731627 addons.go:248] addon dashboard should already be in state true
	I1101 12:01:05.684073  731627 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:01:05.684472  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.685862  731627 addons.go:70] Setting default-storageclass=true in profile "embed-certs-816860"
	I1101 12:01:05.685896  731627 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-816860"
	I1101 12:01:05.686204  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.695845  731627 out.go:179] * Verifying Kubernetes components...
	I1101 12:01:05.699051  731627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:05.740559  731627 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:01:05.740630  731627 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:01:05.743325  731627 addons.go:239] Setting addon default-storageclass=true in "embed-certs-816860"
	W1101 12:01:05.743348  731627 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:01:05.743372  731627 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:01:05.743778  731627 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:01:05.743920  731627 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:01:05.743934  731627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:01:05.743970  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:05.747436  731627 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:01:05.750281  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:01:05.750303  731627 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:01:05.750371  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:05.783620  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:05.810012  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:05.810921  731627 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:01:05.810936  731627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:01:05.810991  731627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:01:05.839501  731627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:01:06.049419  731627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:01:06.061799  731627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:01:06.079546  731627 node_ready.go:35] waiting up to 6m0s for node "embed-certs-816860" to be "Ready" ...
	I1101 12:01:06.144364  731627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:01:06.148422  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:01:06.148443  731627 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:01:06.231018  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:01:06.231039  731627 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:01:06.276107  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:01:06.276134  731627 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:01:06.320442  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:01:06.320512  731627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:01:06.377493  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:01:06.377556  731627 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:01:06.412516  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:01:06.412641  731627 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:01:06.472774  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:01:06.472840  731627 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:01:06.496604  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:01:06.496674  731627 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:01:06.516592  731627 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:01:06.516665  731627 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:01:06.542945  731627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 12:01:04.914307  728709 pod_ready.go:104] pod "coredns-66bc5c9577-s7p9w" is not "Ready", error: <nil>
	I1101 12:01:06.905479  728709 pod_ready.go:94] pod "coredns-66bc5c9577-s7p9w" is "Ready"
	I1101 12:01:06.905501  728709 pod_ready.go:86] duration metric: took 33.004992858s for pod "coredns-66bc5c9577-s7p9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.912649  728709 pod_ready.go:83] waiting for pod "etcd-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.917374  728709 pod_ready.go:94] pod "etcd-no-preload-198717" is "Ready"
	I1101 12:01:06.917395  728709 pod_ready.go:86] duration metric: took 4.672963ms for pod "etcd-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.924016  728709 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.928133  728709 pod_ready.go:94] pod "kube-apiserver-no-preload-198717" is "Ready"
	I1101 12:01:06.928155  728709 pod_ready.go:86] duration metric: took 4.115647ms for pod "kube-apiserver-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:06.931274  728709 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.103523  728709 pod_ready.go:94] pod "kube-controller-manager-no-preload-198717" is "Ready"
	I1101 12:01:07.103598  728709 pod_ready.go:86] duration metric: took 172.304485ms for pod "kube-controller-manager-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.303761  728709 pod_ready.go:83] waiting for pod "kube-proxy-tlh2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.705641  728709 pod_ready.go:94] pod "kube-proxy-tlh2v" is "Ready"
	I1101 12:01:07.705671  728709 pod_ready.go:86] duration metric: took 401.835688ms for pod "kube-proxy-tlh2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:07.904346  728709 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:08.318740  728709 pod_ready.go:94] pod "kube-scheduler-no-preload-198717" is "Ready"
	I1101 12:01:08.318769  728709 pod_ready.go:86] duration metric: took 414.394636ms for pod "kube-scheduler-no-preload-198717" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:08.318781  728709 pod_ready.go:40] duration metric: took 34.423315708s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:01:08.421847  728709 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:01:08.424753  728709 out.go:179] * Done! kubectl is now configured to use "no-preload-198717" cluster and "default" namespace by default
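At this point the profile's entry has been written to the shared kubeconfig, so the cluster can be inspected directly; in minikube the context name matches the profile name:

	kubectl --context no-preload-198717 get pods -A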
	I1101 12:01:11.767283  731627 node_ready.go:49] node "embed-certs-816860" is "Ready"
	I1101 12:01:11.767312  731627 node_ready.go:38] duration metric: took 5.687689685s for node "embed-certs-816860" to be "Ready" ...
	I1101 12:01:11.767327  731627 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:01:11.767389  731627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:01:13.502388  731627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.440510551s)
	I1101 12:01:13.502465  731627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.358082852s)
	I1101 12:01:13.502824  731627 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.959796265s)
	I1101 12:01:13.503101  731627 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.73570021s)
	I1101 12:01:13.503137  731627 api_server.go:72] duration metric: took 7.820139847s to wait for apiserver process to appear ...
	I1101 12:01:13.503145  731627 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:01:13.503159  731627 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:01:13.506774  731627 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-816860 addons enable metrics-server
	
	I1101 12:01:13.518608  731627 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 12:01:13.520279  731627 api_server.go:141] control plane version: v1.34.1
	I1101 12:01:13.520305  731627 api_server.go:131] duration metric: took 17.15374ms to wait for apiserver health ...
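The healthz probe above is an unauthenticated GET against the apiserver. The same check can be reproduced from the host (self-signed certificate, hence -k):

	curl -sk https://192.168.76.2:8443/healthz   # expected body: ok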
	I1101 12:01:13.520315  731627 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:01:13.524985  731627 system_pods.go:59] 8 kube-system pods found
	I1101 12:01:13.525033  731627 system_pods.go:61] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:01:13.525047  731627 system_pods.go:61] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:01:13.525054  731627 system_pods.go:61] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:01:13.525067  731627 system_pods.go:61] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:01:13.525074  731627 system_pods.go:61] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:01:13.525079  731627 system_pods.go:61] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:01:13.525092  731627 system_pods.go:61] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:01:13.525098  731627 system_pods.go:61] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Running
	I1101 12:01:13.525104  731627 system_pods.go:74] duration metric: took 4.78366ms to wait for pod list to return data ...
	I1101 12:01:13.525128  731627 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:01:13.532908  731627 default_sa.go:45] found service account: "default"
	I1101 12:01:13.532936  731627 default_sa.go:55] duration metric: took 7.801865ms for default service account to be created ...
	I1101 12:01:13.532949  731627 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:01:13.543066  731627 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 12:01:13.545776  731627 addons.go:515] duration metric: took 7.86254325s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 12:01:13.625237  731627 system_pods.go:86] 8 kube-system pods found
	I1101 12:01:13.625279  731627 system_pods.go:89] "coredns-66bc5c9577-4d2b7" [27152cf3-def0-4a5e-baae-3dcead2874e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:01:13.625288  731627 system_pods.go:89] "etcd-embed-certs-816860" [8ba1d0da-c29f-4ba7-9855-801ae8451400] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:01:13.625295  731627 system_pods.go:89] "kindnet-zmkct" [e84bf106-0b04-4eb0-b1a5-fd02fe9447ce] Running
	I1101 12:01:13.625302  731627 system_pods.go:89] "kube-apiserver-embed-certs-816860" [17b922b2-1418-40ad-96e7-083ebadac418] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:01:13.625308  731627 system_pods.go:89] "kube-controller-manager-embed-certs-816860" [9b4e6cda-7c78-4bf5-a0a4-dc87924beeb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:01:13.625318  731627 system_pods.go:89] "kube-proxy-q5757" [105f4e25-c2c1-40ce-9ca4-b9327682eb0a] Running
	I1101 12:01:13.625325  731627 system_pods.go:89] "kube-scheduler-embed-certs-816860" [ae7b7580-3c87-4017-8397-05d15844d57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:01:13.625329  731627 system_pods.go:89] "storage-provisioner" [bb93e4fb-e7b0-49ed-8abb-9842fc9950c6] Running
	I1101 12:01:13.625336  731627 system_pods.go:126] duration metric: took 92.38187ms to wait for k8s-apps to be running ...
	I1101 12:01:13.625344  731627 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:01:13.625400  731627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:01:13.662917  731627 system_svc.go:56] duration metric: took 37.562725ms WaitForService to wait for kubelet
	I1101 12:01:13.662955  731627 kubeadm.go:587] duration metric: took 7.979968841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:01:13.662975  731627 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:01:13.666848  731627 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:01:13.666876  731627 node_conditions.go:123] node cpu capacity is 2
	I1101 12:01:13.666889  731627 node_conditions.go:105] duration metric: took 3.908301ms to run NodePressure ...
	I1101 12:01:13.666903  731627 start.go:242] waiting for startup goroutines ...
	I1101 12:01:13.666911  731627 start.go:247] waiting for cluster config update ...
	I1101 12:01:13.666922  731627 start.go:256] writing updated cluster config ...
	I1101 12:01:13.667208  731627 ssh_runner.go:195] Run: rm -f paused
	I1101 12:01:13.671889  731627 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:01:13.676422  731627 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4d2b7" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 12:01:15.690231  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:18.193020  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:20.195295  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
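The pod_ready loop above is polling the coredns pod; an equivalent manual check uses the same k8s-app=kube-dns label the test waits on:

	kubectl --context embed-certs-816860 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s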
	
	
	==> CRI-O <==
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.196117826Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1cf66726-4df3-437b-879f-29893fc5a6d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.197271474Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d34b51ff-8d90-4376-ae06-9a3ae9c0b210 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.197433429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.210857159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.211063028Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f99acf74779d9fe04db4ade6a05946b772d16234a57a15a0e47ec7c504fe1084/merged/etc/passwd: no such file or directory"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.211095874Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f99acf74779d9fe04db4ade6a05946b772d16234a57a15a0e47ec7c504fe1084/merged/etc/group: no such file or directory"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.211466718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.25061663Z" level=info msg="Created container c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3: kube-system/storage-provisioner/storage-provisioner" id=d34b51ff-8d90-4376-ae06-9a3ae9c0b210 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.251721186Z" level=info msg="Starting container: c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3" id=e81af74e-1278-40f7-a4dd-710b3d2ee1ab name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:01:04 no-preload-198717 crio[647]: time="2025-11-01T12:01:04.2565809Z" level=info msg="Started container" PID=1631 containerID=c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3 description=kube-system/storage-provisioner/storage-provisioner id=e81af74e-1278-40f7-a4dd-710b3d2ee1ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=e37e515a354fa7eea877e9d5689a53e75d7e6932df35cfef75d0296e90609f1b
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.673362602Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.680002417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.680165242Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.680250199Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.685007585Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.685048546Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.685071643Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.691443927Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.6914814Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.6915017Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.709204599Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.709238593Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.709261798Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.716962517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:12 no-preload-198717 crio[647]: time="2025-11-01T12:01:12.717004856Z" level=info msg="Updated default CNI network name to kindnet"
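The CREATE/WRITE/RENAME events above are kindnet rewriting its CNI config atomically (write the .temp file, then rename it into place). To confirm the final config CRI-O picked up, it can be read inside the node (sketch, via the profile's ssh access):

	minikube -p no-preload-198717 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist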
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c7087c8eaba44       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   e37e515a354fa       storage-provisioner                          kube-system
	d024886bd481f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   7b3c27b4017d7       dashboard-metrics-scraper-6ffb444bf9-txkm8   kubernetes-dashboard
	1b419ba60a935       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   736653037f5de       kubernetes-dashboard-855c9754f9-n6g7x        kubernetes-dashboard
	71767ab10cbc0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   d6922c4ad3f75       busybox                                      default
	c2cce03668294       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   512317ab015b8       coredns-66bc5c9577-s7p9w                     kube-system
	6fb0dc993fbe2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   1ea679ebda0a8       kindnet-qnmmf                                kube-system
	b38d516da8a6e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   e37e515a354fa       storage-provisioner                          kube-system
	a22b83973f57d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   599d815732723       kube-proxy-tlh2v                             kube-system
	4146bebcfc78f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f19b1b5e5067a       kube-apiserver-no-preload-198717             kube-system
	f3772f41e725d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e7a06bf3f553f       kube-scheduler-no-preload-198717             kube-system
	9d24638b6e39f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   e56c6b4defe9b       kube-controller-manager-no-preload-198717    kube-system
	41bc0ffb4ace7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b4d7cf56dffd3       etcd-no-preload-198717                       kube-system
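The table above is read from the CRI; it can be regenerated on the node with crictl, which also accepts the --label filter the test used earlier:

	minikube -p no-preload-198717 ssh -- sudo crictl ps -a -o table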
	
	
	==> coredns [c2cce036682945a016dcf6de8c5b63c2797f50b4a3bde0c30d10229ce295a9df] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57846 - 55679 "HINFO IN 1180668652487777901.6623066087083184299. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023039879s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
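The i/o timeouts to 10.96.0.1:443 indicate coredns could not reach the in-cluster apiserver VIP while the control plane was restarting; they stop once the informer cache syncs. A quick after-the-fact check of the VIP and apiserver readiness (sketch):

	kubectl --context no-preload-198717 get svc kubernetes   # ClusterIP is normally 10.96.0.1
	kubectl --context no-preload-198717 get --raw /readyz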
	
	
	==> describe nodes <==
	Name:               no-preload-198717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-198717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=no-preload-198717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-198717
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:01:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:01:02 +0000   Sat, 01 Nov 2025 11:59:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-198717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f8c2bafd-3783-4a3a-8c96-56d9871a2cad
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-s7p9w                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-198717                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-qnmmf                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-198717              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-198717     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-tlh2v                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-198717              100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-txkm8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n6g7x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 110s               kube-proxy       
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientPID     117s               kubelet          Node no-preload-198717 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 117s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  117s               kubelet          Node no-preload-198717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s               kubelet          Node no-preload-198717 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 117s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           113s               node-controller  Node no-preload-198717 event: Registered Node no-preload-198717 in Controller
	  Normal   NodeReady                96s                kubelet          Node no-preload-198717 status is now: NodeReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)  kubelet          Node no-preload-198717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)  kubelet          Node no-preload-198717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)  kubelet          Node no-preload-198717 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node no-preload-198717 event: Registered Node no-preload-198717 in Controller
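This node description is the view embedded by minikube logs; the same output is available directly from the cluster:

	kubectl --context no-preload-198717 describe node no-preload-198717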
	
	
	==> dmesg <==
	[Nov 1 11:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [41bc0ffb4ace7b78b5269921a034d897960eed08f17125d3ab8c8df9c3a224fd] <==
	{"level":"warn","ts":"2025-11-01T12:00:28.552320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.597870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.653026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.701662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.726373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.773147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.784167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.833064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.870845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.906304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.954083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:28.981940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.013407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.054292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.113464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.133360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.211128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.232197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.251685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.286007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.310540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.354318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.362032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.387832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:00:29.458348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49950","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:01:27 up  3:43,  0 user,  load average: 4.75, 3.82, 2.95
	Linux no-preload-198717 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fb0dc993fbe24c714882e89d86ada6b3ba240cf813b4528e24730daf7e3b3d8] <==
	I1101 12:00:32.473360       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:00:32.476748       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 12:00:32.476883       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:00:32.476896       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:00:32.476906       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:00:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:00:32.672485       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:00:32.672514       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:00:32.672522       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:00:32.673342       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:01:02.673084       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 12:01:02.673090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:01:02.673227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:01:02.673329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 12:01:03.772991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:01:03.773096       1 metrics.go:72] Registering metrics
	I1101 12:01:03.773202       1 controller.go:711] "Syncing nftables rules"
	I1101 12:01:12.673062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:01:12.673135       1 main.go:301] handling current node
	I1101 12:01:22.677227       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:01:22.677260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4146bebcfc78fff7e205d15a351a3b9489d9f1d7f2ce428d242490a4a9a214da] <==
	I1101 12:00:31.370078       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 12:00:31.370119       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 12:00:31.370132       1 policy_source.go:240] refreshing policies
	I1101 12:00:31.370165       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:00:31.372292       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:00:31.372581       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 12:00:31.372593       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 12:00:31.373142       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 12:00:31.374145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:00:31.388043       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:00:31.394652       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:00:31.394782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:00:31.405577       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 12:00:31.431598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1101 12:00:31.574352       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:00:31.589651       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:00:32.657129       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:00:32.885005       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:00:32.977023       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:00:33.002779       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:00:33.274376       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.123.31"}
	I1101 12:00:33.321066       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.58.24"}
	I1101 12:00:34.903816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:00:35.150678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:00:35.348782       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9d24638b6e39f00dc4f5ad46eade0ee4467aa0d861d222443a6b43a6ccaaf579] <==
	I1101 12:00:34.913895       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:00:34.913981       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:00:34.914010       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:00:34.914037       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:00:34.913941       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 12:00:34.919575       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 12:00:34.919960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:00:34.921895       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:00:34.925351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 12:00:34.925559       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 12:00:34.925823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 12:00:34.925933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 12:00:34.930073       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 12:00:34.939718       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 12:00:34.942116       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 12:00:34.942342       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 12:00:34.942408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 12:00:34.943271       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:00:34.944505       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 12:00:34.946853       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:00:34.949334       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 12:00:34.950578       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:00:34.952015       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 12:00:34.956223       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 12:00:34.963618       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [a22b83973f57d185b05c922046586076ab67a6c7b4b442258a7b45e95082a942] <==
	I1101 12:00:33.442028       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:00:33.767995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:00:33.872783       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:00:33.878040       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 12:00:33.881086       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:00:33.953029       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:00:33.953161       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:00:33.960705       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:00:33.962077       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:00:33.962152       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:00:33.969259       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:00:33.969338       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:00:33.969997       1 config.go:200] "Starting service config controller"
	I1101 12:00:33.970041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:00:33.970324       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:00:33.970330       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:00:33.970699       1 config.go:309] "Starting node config controller"
	I1101 12:00:33.970706       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:00:33.970711       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:00:34.070420       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:00:34.070524       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 12:00:34.070629       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f3772f41e725d1af7e862ae449d7118696e53f3be37b8779faa9d26f954875a8] <==
	I1101 12:00:28.105033       1 serving.go:386] Generated self-signed cert in-memory
	I1101 12:00:34.064212       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:00:34.064328       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:00:34.069545       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 12:00:34.069976       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 12:00:34.069939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:00:34.070116       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:00:34.070198       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:00:34.070121       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:00:34.069956       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:00:34.075847       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:00:34.170420       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:00:34.170530       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 12:00:34.176936       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:00:35 no-preload-198717 kubelet[763]: I1101 12:00:35.504111     763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/65a2878c-af42-4b31-aec1-ec9f78bd70aa-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-txkm8\" (UID: \"65a2878c-af42-4b31-aec1-ec9f78bd70aa\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8"
	Nov 01 12:00:35 no-preload-198717 kubelet[763]: W1101 12:00:35.792010     763 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/crio-7b3c27b4017d7137638b17c2e81be208663608e8c67d1dcba3003f1cb039c5c3 WatchSource:0}: Error finding container 7b3c27b4017d7137638b17c2e81be208663608e8c67d1dcba3003f1cb039c5c3: Status 404 returned error can't find the container with id 7b3c27b4017d7137638b17c2e81be208663608e8c67d1dcba3003f1cb039c5c3
	Nov 01 12:00:35 no-preload-198717 kubelet[763]: W1101 12:00:35.812763     763 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c52fbb51f4c48961f8fcc6a9e1280ab9144e4153e09bfa64b71c71e95e5acb9d/crio-736653037f5dec2728b40fe7239a1004d9aad438fd191ec87ab6955719e55fed WatchSource:0}: Error finding container 736653037f5dec2728b40fe7239a1004d9aad438fd191ec87ab6955719e55fed: Status 404 returned error can't find the container with id 736653037f5dec2728b40fe7239a1004d9aad438fd191ec87ab6955719e55fed
	Nov 01 12:00:36 no-preload-198717 kubelet[763]: I1101 12:00:36.412113     763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 12:00:40 no-preload-198717 kubelet[763]: I1101 12:00:40.118811     763 scope.go:117] "RemoveContainer" containerID="938a986ef6264ada6f931e206652d92abe8a537626c0529b9a6d8dedae4f7cf1"
	Nov 01 12:00:41 no-preload-198717 kubelet[763]: I1101 12:00:41.123500     763 scope.go:117] "RemoveContainer" containerID="938a986ef6264ada6f931e206652d92abe8a537626c0529b9a6d8dedae4f7cf1"
	Nov 01 12:00:41 no-preload-198717 kubelet[763]: I1101 12:00:41.123844     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:00:41 no-preload-198717 kubelet[763]: E1101 12:00:41.123997     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:00:42 no-preload-198717 kubelet[763]: I1101 12:00:42.129334     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:00:42 no-preload-198717 kubelet[763]: E1101 12:00:42.129505     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:00:45 no-preload-198717 kubelet[763]: I1101 12:00:45.768750     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:00:45 no-preload-198717 kubelet[763]: E1101 12:00:45.770244     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:00:47 no-preload-198717 kubelet[763]: I1101 12:00:47.167258     763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n6g7x" podStartSLOduration=0.904170894 podStartE2EDuration="12.167240695s" podCreationTimestamp="2025-11-01 12:00:35 +0000 UTC" firstStartedPulling="2025-11-01 12:00:35.82455111 +0000 UTC m=+11.149117899" lastFinishedPulling="2025-11-01 12:00:47.08762091 +0000 UTC m=+22.412187700" observedRunningTime="2025-11-01 12:00:47.157946569 +0000 UTC m=+22.482513383" watchObservedRunningTime="2025-11-01 12:00:47.167240695 +0000 UTC m=+22.491807485"
	Nov 01 12:01:00 no-preload-198717 kubelet[763]: I1101 12:01:00.898844     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:01:01 no-preload-198717 kubelet[763]: I1101 12:01:01.181596     763 scope.go:117] "RemoveContainer" containerID="335265e1975ae387128bbd1095b6c8d8fe046e7cef3ae24efcd707b90da86e14"
	Nov 01 12:01:01 no-preload-198717 kubelet[763]: I1101 12:01:01.182168     763 scope.go:117] "RemoveContainer" containerID="d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	Nov 01 12:01:01 no-preload-198717 kubelet[763]: E1101 12:01:01.182498     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:01:04 no-preload-198717 kubelet[763]: I1101 12:01:04.193882     763 scope.go:117] "RemoveContainer" containerID="b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa"
	Nov 01 12:01:05 no-preload-198717 kubelet[763]: I1101 12:01:05.761609     763 scope.go:117] "RemoveContainer" containerID="d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	Nov 01 12:01:05 no-preload-198717 kubelet[763]: E1101 12:01:05.761964     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:01:18 no-preload-198717 kubelet[763]: I1101 12:01:18.898168     763 scope.go:117] "RemoveContainer" containerID="d024886bd481f8d502061d838e84fae7dc51337055c12ff4c38953b14cd50712"
	Nov 01 12:01:18 no-preload-198717 kubelet[763]: E1101 12:01:18.898339     763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-txkm8_kubernetes-dashboard(65a2878c-af42-4b31-aec1-ec9f78bd70aa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-txkm8" podUID="65a2878c-af42-4b31-aec1-ec9f78bd70aa"
	Nov 01 12:01:21 no-preload-198717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:01:21 no-preload-198717 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:01:21 no-preload-198717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1b419ba60a9359ae536898d052ea0d4354e52910cfd2d21aa073abc9c568c354] <==
	2025/11/01 12:00:47 Using namespace: kubernetes-dashboard
	2025/11/01 12:00:47 Using in-cluster config to connect to apiserver
	2025/11/01 12:00:47 Using secret token for csrf signing
	2025/11/01 12:00:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 12:00:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 12:00:47 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 12:00:47 Generating JWE encryption key
	2025/11/01 12:00:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 12:00:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 12:00:47 Initializing JWE encryption key from synchronized object
	2025/11/01 12:00:47 Creating in-cluster Sidecar client
	2025/11/01 12:00:47 Serving insecurely on HTTP port: 9090
	2025/11/01 12:00:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:01:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:00:47 Starting overwatch
	
	
	==> storage-provisioner [b38d516da8a6e0ae3a719ac17f02835460fe309ee364bdff5c0ab79163282caa] <==
	I1101 12:00:33.404575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 12:01:03.407401       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c7087c8eaba448262f8f6af80323e708065f53f6c3439eadbf4351a3e6476aa3] <==
	I1101 12:01:04.278000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:01:04.290657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:01:04.290778       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:01:04.294246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:07.749897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:12.011776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:15.610183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:18.664357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:21.689607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:21.708384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:01:21.708790       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:01:21.709000       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-198717_44d86576-024c-4998-8f48-17d69290c56b!
	W1101 12:01:21.725247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:01:21.725165       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a58c8693-3c87-4a71-8fd5-eb11efb6d780", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-198717_44d86576-024c-4998-8f48-17d69290c56b became leader
	W1101 12:01:21.770438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:01:21.825366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-198717_44d86576-024c-4998-8f48-17d69290c56b!
	W1101 12:01:23.774253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:23.783241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:25.787061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:25.799431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-198717 -n no-preload-198717
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-198717 -n no-preload-198717: exit status 2 (453.077393ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-198717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.95s)
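The post-mortem query at helpers_test.go:269 above uses kubectl's field selector to list any pod whose phase is not Running. For reference, the same check can be re-run outside the harness; the sketch below is illustrative only, assumes kubectl is on PATH, and takes the context/profile name from this run.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same query the harness issues above: pods (all namespaces) whose phase is not Running.
		out, err := exec.Command("kubectl",
			"--context", "no-preload-198717",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("non-Running pods: %q\n", out)
	}

If it prints an empty list, every pod reported phase Running at query time.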

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-816860 --alsologtostderr -v=1
E1101 12:02:01.365648  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-816860 --alsologtostderr -v=1: exit status 80 (2.405125409s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-816860 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 12:02:01.199920  737409 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:02:01.200128  737409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:01.200158  737409 out.go:374] Setting ErrFile to fd 2...
	I1101 12:02:01.200176  737409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:01.200485  737409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:02:01.200819  737409 out.go:368] Setting JSON to false
	I1101 12:02:01.200875  737409 mustload.go:66] Loading cluster: embed-certs-816860
	I1101 12:02:01.201300  737409 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:01.201838  737409 cli_runner.go:164] Run: docker container inspect embed-certs-816860 --format={{.State.Status}}
	I1101 12:02:01.229632  737409 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:02:01.230018  737409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:01.335749  737409 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 12:02:01.324843716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:01.336444  737409 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-816860 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 12:02:01.340097  737409 out.go:179] * Pausing node embed-certs-816860 ... 
	I1101 12:02:01.342960  737409 host.go:66] Checking if "embed-certs-816860" exists ...
	I1101 12:02:01.343284  737409 ssh_runner.go:195] Run: systemctl --version
	I1101 12:02:01.343387  737409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-816860
	I1101 12:02:01.370168  737409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/embed-certs-816860/id_rsa Username:docker}
	I1101 12:02:01.476736  737409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:02:01.494952  737409 pause.go:52] kubelet running: true
	I1101 12:02:01.495031  737409 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:02:01.845930  737409 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:02:01.846028  737409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:02:01.982639  737409 cri.go:89] found id: "fcad0650c375b2e98da25fb4e730f8abde8304d6e156ade08d80be325c528a3f"
	I1101 12:02:01.982685  737409 cri.go:89] found id: "1552d26e4133a90fe2b6dfd704c6114d0b192f06060732c329a84f4146dd2526"
	I1101 12:02:01.982691  737409 cri.go:89] found id: "655b7eefdde39dfe0355c9dcc040eb01ff76cb1f69dfd0ba6016dcf06530398d"
	I1101 12:02:01.982695  737409 cri.go:89] found id: "995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e"
	I1101 12:02:01.982698  737409 cri.go:89] found id: "8033962726d30cb6bc62c8ed294a3ef636f01bb6c7ea4c31fb32722c0160af44"
	I1101 12:02:01.982702  737409 cri.go:89] found id: "4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f"
	I1101 12:02:01.982705  737409 cri.go:89] found id: "39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee"
	I1101 12:02:01.982708  737409 cri.go:89] found id: "a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c"
	I1101 12:02:01.982711  737409 cri.go:89] found id: "4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6"
	I1101 12:02:01.982718  737409 cri.go:89] found id: "852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df"
	I1101 12:02:01.982729  737409 cri.go:89] found id: "a75ddb667c05fcab243c095a16373bc468a7c774034e2506e30ef093ccc9ca4d"
	I1101 12:02:01.982735  737409 cri.go:89] found id: ""
	I1101 12:02:01.982790  737409 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:02:02.003377  737409 retry.go:31] will retry after 320.508807ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:02:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:02:02.324963  737409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:02:02.345731  737409 pause.go:52] kubelet running: false
	I1101 12:02:02.345841  737409 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:02:02.591607  737409 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:02:02.591708  737409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:02:02.724219  737409 cri.go:89] found id: "fcad0650c375b2e98da25fb4e730f8abde8304d6e156ade08d80be325c528a3f"
	I1101 12:02:02.724286  737409 cri.go:89] found id: "1552d26e4133a90fe2b6dfd704c6114d0b192f06060732c329a84f4146dd2526"
	I1101 12:02:02.724308  737409 cri.go:89] found id: "655b7eefdde39dfe0355c9dcc040eb01ff76cb1f69dfd0ba6016dcf06530398d"
	I1101 12:02:02.724329  737409 cri.go:89] found id: "995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e"
	I1101 12:02:02.724363  737409 cri.go:89] found id: "8033962726d30cb6bc62c8ed294a3ef636f01bb6c7ea4c31fb32722c0160af44"
	I1101 12:02:02.724389  737409 cri.go:89] found id: "4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f"
	I1101 12:02:02.724410  737409 cri.go:89] found id: "39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee"
	I1101 12:02:02.724431  737409 cri.go:89] found id: "a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c"
	I1101 12:02:02.724471  737409 cri.go:89] found id: "4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6"
	I1101 12:02:02.724494  737409 cri.go:89] found id: "852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df"
	I1101 12:02:02.724511  737409 cri.go:89] found id: "a75ddb667c05fcab243c095a16373bc468a7c774034e2506e30ef093ccc9ca4d"
	I1101 12:02:02.724530  737409 cri.go:89] found id: ""
	I1101 12:02:02.724612  737409 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:02:02.747005  737409 retry.go:31] will retry after 341.891845ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:02:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:02:03.089596  737409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:02:03.108436  737409 pause.go:52] kubelet running: false
	I1101 12:02:03.108498  737409 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:02:03.362507  737409 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:02:03.362586  737409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:02:03.470795  737409 cri.go:89] found id: "fcad0650c375b2e98da25fb4e730f8abde8304d6e156ade08d80be325c528a3f"
	I1101 12:02:03.470815  737409 cri.go:89] found id: "1552d26e4133a90fe2b6dfd704c6114d0b192f06060732c329a84f4146dd2526"
	I1101 12:02:03.470821  737409 cri.go:89] found id: "655b7eefdde39dfe0355c9dcc040eb01ff76cb1f69dfd0ba6016dcf06530398d"
	I1101 12:02:03.470825  737409 cri.go:89] found id: "995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e"
	I1101 12:02:03.470828  737409 cri.go:89] found id: "8033962726d30cb6bc62c8ed294a3ef636f01bb6c7ea4c31fb32722c0160af44"
	I1101 12:02:03.470832  737409 cri.go:89] found id: "4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f"
	I1101 12:02:03.470835  737409 cri.go:89] found id: "39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee"
	I1101 12:02:03.470838  737409 cri.go:89] found id: "a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c"
	I1101 12:02:03.470841  737409 cri.go:89] found id: "4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6"
	I1101 12:02:03.470847  737409 cri.go:89] found id: "852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df"
	I1101 12:02:03.470851  737409 cri.go:89] found id: "a75ddb667c05fcab243c095a16373bc468a7c774034e2506e30ef093ccc9ca4d"
	I1101 12:02:03.470853  737409 cri.go:89] found id: ""
	I1101 12:02:03.470903  737409 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:02:03.488809  737409 out.go:203] 
	W1101 12:02:03.491883  737409 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:02:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:02:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 12:02:03.491912  737409 out.go:285] * 
	* 
	W1101 12:02:03.501465  737409 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 12:02:03.505855  737409 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-816860 --alsologtostderr -v=1 failed: exit status 80
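The stderr above shows why the pause exits with status 80: each pass enumerates kube-system/kubernetes-dashboard/istio-operator containers via crictl, then runs `sudo runc list -f json` on the node, which exits 1 because /run/runc does not exist; retry.go backs off (~320-340ms) twice before pause.go gives up with GUEST_PAUSE. The following is a minimal sketch of that check, runnable from the test host against this profile; it is not minikube's implementation, and it assumes `minikube ssh` forwards the trailing command to the node.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const profile = "embed-certs-816860" // profile name taken from this run
		for attempt := 1; attempt <= 3; attempt++ {
			// The same node-side check that fails in the stderr above.
			out, err := exec.Command("minikube", "ssh", "-p", profile, "--",
				"sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("runc list succeeded:\n%s", out)
				return
			}
			fmt.Printf("attempt %d: %v\n%s", attempt, err, out)
			time.Sleep(350 * time.Millisecond) // roughly the backoff retry.go logs above
		}
		fmt.Println("still failing; this is the state the test reports as GUEST_PAUSE")
	}

On this run it would keep printing "open /run/runc: no such file or directory", matching the error minikube surfaces.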
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-816860
helpers_test.go:243: (dbg) docker inspect embed-certs-816860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a",
	        "Created": "2025-11-01T11:59:10.098758518Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731754,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:00:57.889731598Z",
	            "FinishedAt": "2025-11-01T12:00:56.844221678Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/hosts",
	        "LogPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a-json.log",
	        "Name": "/embed-certs-816860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-816860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-816860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a",
	                "LowerDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-816860",
	                "Source": "/var/lib/docker/volumes/embed-certs-816860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-816860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-816860",
	                "name.minikube.sigs.k8s.io": "embed-certs-816860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efd0585f14a77fec021b31a006cd5b3c2a68639411858f92819fd508dff165fc",
	            "SandboxKey": "/var/run/docker/netns/efd0585f14a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-816860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:2d:6d:83:f7:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c593e124071dc106f0bb655a4bbd20938473ea59778c717ee430f5236bedf71",
	                    "EndpointID": "02b1049b76ce728577410354fe88d9d15f9927d1d9a1ec0e493c954bc3c4afe7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-816860",
	                        "5efd8111d020"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
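The inspect output confirms what cli_runner.go queried earlier in the stderr: the node's 22/tcp is published on 127.0.0.1:33800 (see NetworkSettings.Ports above). Below is a small illustrative Go sketch using the same inspect template; the container name is the profile from this run, and docker must be reachable from the host.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same Go template cli_runner.go used above to locate the SSH port.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"embed-certs-816860").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp is published on host port", strings.TrimSpace(string(out))) // 33800 in this run
	}

While the container is up this should print 33800, matching the sshutil.go client entry in the pause stderr.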
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860
E1101 12:02:03.927576  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860: exit status 2 (445.343198ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
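`minikube status --format={{.Host}}` prints only the host field ("Running" above), while the non-zero exit code encodes which components minikube did not find in the expected state; immediately after `minikube pause` that is unsurprising, which is why the harness flags the error as possibly benign. A small Go sketch, assuming the binary path and profile taken from this log, that captures both the printed state and the exit code instead of treating a non-zero exit as a hard failure:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status query as the harness above and returns the
// printed host field together with the command's exit code.
func hostState(minikubeBin, profile string) (state string, exitCode int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile)
	out, runErr := cmd.Output()
	state = strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(runErr, &ee) {
		return state, ee.ExitCode(), nil // non-zero exit carries status info, not an error
	}
	return state, 0, runErr
}

func main() {
	state, code, err := hostState("out/minikube-linux-arm64", "embed-certs-816860")
	fmt.Printf("host=%s exit=%d err=%v\n", state, code, err)
}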
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-816860 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-816860 logs -n 25: (1.613225586s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:01:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:01:31.667237  735220 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:01:31.667477  735220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:01:31.667508  735220 out.go:374] Setting ErrFile to fd 2...
	I1101 12:01:31.667547  735220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:01:31.667937  735220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:01:31.668408  735220 out.go:368] Setting JSON to false
	I1101 12:01:31.669567  735220 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13441,"bootTime":1761985051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:01:31.669664  735220 start.go:143] virtualization:  
	I1101 12:01:31.675844  735220 out.go:179] * [default-k8s-diff-port-772362] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:01:31.679716  735220 notify.go:221] Checking for updates...
	I1101 12:01:31.680599  735220 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:01:31.684571  735220 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:01:31.688279  735220 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:01:31.691357  735220 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:01:31.694476  735220 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:01:31.697675  735220 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:01:31.701347  735220 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:31.701540  735220 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:01:31.741764  735220 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:01:31.741905  735220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:01:31.809938  735220 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:01:31.800208318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:01:31.810063  735220 docker.go:319] overlay module found
	I1101 12:01:31.813395  735220 out.go:179] * Using the docker driver based on user configuration
	I1101 12:01:31.816303  735220 start.go:309] selected driver: docker
	I1101 12:01:31.816328  735220 start.go:930] validating driver "docker" against <nil>
	I1101 12:01:31.816345  735220 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:01:31.817118  735220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:01:31.875149  735220 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:01:31.866024807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:01:31.875304  735220 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 12:01:31.875535  735220 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:01:31.878530  735220 out.go:179] * Using Docker driver with root privileges
	I1101 12:01:31.881474  735220 cni.go:84] Creating CNI manager for ""
	I1101 12:01:31.881539  735220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:01:31.881552  735220 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 12:01:31.881628  735220 start.go:353] cluster config:
	{Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:01:31.884798  735220 out.go:179] * Starting "default-k8s-diff-port-772362" primary control-plane node in "default-k8s-diff-port-772362" cluster
	I1101 12:01:31.887678  735220 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:01:31.890582  735220 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:01:31.893386  735220 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:31.893448  735220 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:01:31.893461  735220 cache.go:59] Caching tarball of preloaded images
	I1101 12:01:31.893482  735220 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:01:31.893549  735220 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:01:31.893559  735220 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:01:31.893675  735220 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json ...
	I1101 12:01:31.893713  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json: {Name:mkb3b73b8c3e9b3e5943db629e7f5837a3594cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:31.913598  735220 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:01:31.913625  735220 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:01:31.913638  735220 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:01:31.913660  735220 start.go:360] acquireMachinesLock for default-k8s-diff-port-772362: {Name:mk4216e21d2fa88f97e4740f5b50e6f442617f00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:01:31.913797  735220 start.go:364] duration metric: took 116.325µs to acquireMachinesLock for "default-k8s-diff-port-772362"
	I1101 12:01:31.913829  735220 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:01:31.913909  735220 start.go:125] createHost starting for "" (driver="docker")
	W1101 12:01:29.682398  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:31.684361  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:31.917371  735220 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 12:01:31.917625  735220 start.go:159] libmachine.API.Create for "default-k8s-diff-port-772362" (driver="docker")
	I1101 12:01:31.917668  735220 client.go:173] LocalClient.Create starting
	I1101 12:01:31.917768  735220 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 12:01:31.917804  735220 main.go:143] libmachine: Decoding PEM data...
	I1101 12:01:31.917824  735220 main.go:143] libmachine: Parsing certificate...
	I1101 12:01:31.917891  735220 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 12:01:31.917920  735220 main.go:143] libmachine: Decoding PEM data...
	I1101 12:01:31.917930  735220 main.go:143] libmachine: Parsing certificate...
	I1101 12:01:31.918295  735220 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 12:01:31.934308  735220 cli_runner.go:211] docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 12:01:31.934414  735220 network_create.go:284] running [docker network inspect default-k8s-diff-port-772362] to gather additional debugging logs...
	I1101 12:01:31.934438  735220 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362
	W1101 12:01:31.949958  735220 cli_runner.go:211] docker network inspect default-k8s-diff-port-772362 returned with exit code 1
	I1101 12:01:31.949995  735220 network_create.go:287] error running [docker network inspect default-k8s-diff-port-772362]: docker network inspect default-k8s-diff-port-772362: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-772362 not found
	I1101 12:01:31.950009  735220 network_create.go:289] output of [docker network inspect default-k8s-diff-port-772362]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-772362 not found
	
	** /stderr **
	I1101 12:01:31.950170  735220 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:01:31.966524  735220 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 12:01:31.966889  735220 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 12:01:31.967241  735220 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 12:01:31.967544  735220 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c593e124071 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:f6:22:f3:50:47} reservation:<nil>}
	I1101 12:01:31.967973  735220 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fd110}
	I1101 12:01:31.967996  735220 network_create.go:124] attempt to create docker network default-k8s-diff-port-772362 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 12:01:31.968057  735220 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 default-k8s-diff-port-772362
	I1101 12:01:32.035002  735220 network_create.go:108] docker network default-k8s-diff-port-772362 192.168.85.0/24 created
	I1101 12:01:32.035041  735220 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-772362" container
	I1101 12:01:32.035140  735220 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 12:01:32.052396  735220 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-772362 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --label created_by.minikube.sigs.k8s.io=true
	I1101 12:01:32.072530  735220 oci.go:103] Successfully created a docker volume default-k8s-diff-port-772362
	I1101 12:01:32.072622  735220 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-772362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --entrypoint /usr/bin/test -v default-k8s-diff-port-772362:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 12:01:32.659087  735220 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-772362
	I1101 12:01:32.659130  735220 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:32.659149  735220 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 12:01:32.659217  735220 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 12:01:33.684393  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:36.186294  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:37.127505  735220 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.468242129s)
	I1101 12:01:37.127540  735220 kic.go:203] duration metric: took 4.468386762s to extract preloaded images to volume ...
	W1101 12:01:37.127681  735220 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 12:01:37.127784  735220 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 12:01:37.232415  735220 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-772362 --name default-k8s-diff-port-772362 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --network default-k8s-diff-port-772362 --ip 192.168.85.2 --volume default-k8s-diff-port-772362:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 12:01:37.591428  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Running}}
	I1101 12:01:37.611801  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:01:37.637921  735220 cli_runner.go:164] Run: docker exec default-k8s-diff-port-772362 stat /var/lib/dpkg/alternatives/iptables
	I1101 12:01:37.695118  735220 oci.go:144] the created container "default-k8s-diff-port-772362" has a running status.
	I1101 12:01:37.695149  735220 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa...
	I1101 12:01:38.007573  735220 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 12:01:38.038598  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:01:38.068550  735220 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 12:01:38.068579  735220 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-772362 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 12:01:38.128831  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:01:38.151758  735220 machine.go:94] provisionDockerMachine start ...
	I1101 12:01:38.151869  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:38.180355  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:38.180684  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:38.180701  735220 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:01:38.181263  735220 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35606->127.0.0.1:33805: read: connection reset by peer
	I1101 12:01:41.329407  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:01:41.329434  735220 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772362"
	I1101 12:01:41.329500  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.346380  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:41.346704  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:41.346721  735220 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772362 && echo "default-k8s-diff-port-772362" | sudo tee /etc/hostname
	I1101 12:01:41.508154  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:01:41.508237  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.527482  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:41.527789  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:41.527813  735220 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772362/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772362' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1101 12:01:38.195069  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:40.682598  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:41.683278  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:01:41.683304  735220 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:01:41.683332  735220 ubuntu.go:190] setting up certificates
	I1101 12:01:41.683341  735220 provision.go:84] configureAuth start
	I1101 12:01:41.683399  735220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:01:41.705326  735220 provision.go:143] copyHostCerts
	I1101 12:01:41.705396  735220 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:01:41.705409  735220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:01:41.705489  735220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:01:41.705621  735220 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:01:41.705631  735220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:01:41.705661  735220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:01:41.705917  735220 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:01:41.705929  735220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:01:41.705967  735220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:01:41.706065  735220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772362 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-772362 localhost minikube]
	I1101 12:01:41.797959  735220 provision.go:177] copyRemoteCerts
	I1101 12:01:41.798034  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:01:41.798083  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.815218  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:41.921751  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:01:41.940587  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 12:01:41.958650  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 12:01:41.982515  735220 provision.go:87] duration metric: took 299.14775ms to configureAuth
	I1101 12:01:41.982548  735220 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:01:41.982742  735220 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:41.982858  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.999563  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:41.999873  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:41.999897  735220 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:01:42.403460  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:01:42.403485  735220 machine.go:97] duration metric: took 4.251702472s to provisionDockerMachine
	I1101 12:01:42.403496  735220 client.go:176] duration metric: took 10.485818595s to LocalClient.Create
	I1101 12:01:42.403507  735220 start.go:167] duration metric: took 10.485883638s to libmachine.API.Create "default-k8s-diff-port-772362"
	I1101 12:01:42.403515  735220 start.go:293] postStartSetup for "default-k8s-diff-port-772362" (driver="docker")
	I1101 12:01:42.403525  735220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:01:42.403590  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:01:42.403639  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.424387  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.532914  735220 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:01:42.536437  735220 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:01:42.536469  735220 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:01:42.536480  735220 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:01:42.536540  735220 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:01:42.536629  735220 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:01:42.536736  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:01:42.546824  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:42.566588  735220 start.go:296] duration metric: took 163.057746ms for postStartSetup
	I1101 12:01:42.567026  735220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:01:42.585147  735220 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json ...
	I1101 12:01:42.585431  735220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:01:42.585485  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.602977  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.711279  735220 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:01:42.716253  735220 start.go:128] duration metric: took 10.802328086s to createHost
	I1101 12:01:42.716356  735220 start.go:83] releasing machines lock for "default-k8s-diff-port-772362", held for 10.802542604s
	I1101 12:01:42.716434  735220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:01:42.735084  735220 ssh_runner.go:195] Run: cat /version.json
	I1101 12:01:42.735136  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.735137  735220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:01:42.735213  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.761855  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.762078  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.865271  735220 ssh_runner.go:195] Run: systemctl --version
	I1101 12:01:42.959362  735220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:01:42.996403  735220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:01:43.000788  735220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:01:43.000926  735220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:01:43.032455  735220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 12:01:43.032541  735220 start.go:496] detecting cgroup driver to use...
	I1101 12:01:43.032603  735220 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:01:43.032702  735220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:01:43.051624  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:01:43.065059  735220 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:01:43.065192  735220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:01:43.084356  735220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:01:43.106872  735220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:01:43.278157  735220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:01:43.418786  735220 docker.go:234] disabling docker service ...
	I1101 12:01:43.418898  735220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:01:43.443724  735220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:01:43.458319  735220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:01:43.592984  735220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:01:43.725035  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:01:43.740254  735220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:01:43.755969  735220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:01:43.756034  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.765815  735220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:01:43.765886  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.775751  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.785560  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.795812  735220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:01:43.804593  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.814147  735220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.830210  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.840094  735220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:01:43.848407  735220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:01:43.856620  735220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:43.998798  735220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 12:01:44.148278  735220 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:01:44.148373  735220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:01:44.152344  735220 start.go:564] Will wait 60s for crictl version
	I1101 12:01:44.152448  735220 ssh_runner.go:195] Run: which crictl
	I1101 12:01:44.156433  735220 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:01:44.198024  735220 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:01:44.198130  735220 ssh_runner.go:195] Run: crio --version
	I1101 12:01:44.225854  735220 ssh_runner.go:195] Run: crio --version
	I1101 12:01:44.259781  735220 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:01:44.262637  735220 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:01:44.281682  735220 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 12:01:44.286283  735220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:44.296672  735220 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:01:44.296784  735220 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:44.296857  735220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:44.336734  735220 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:44.336759  735220 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:01:44.336817  735220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:44.362593  735220 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:44.362618  735220 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:01:44.362626  735220 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 12:01:44.362800  735220 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:01:44.362907  735220 ssh_runner.go:195] Run: crio config
	I1101 12:01:44.428869  735220 cni.go:84] Creating CNI manager for ""
	I1101 12:01:44.428894  735220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:01:44.428948  735220 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:01:44.428981  735220 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772362 NodeName:default-k8s-diff-port-772362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:01:44.429163  735220 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:01:44.429272  735220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:01:44.437473  735220 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:01:44.437600  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:01:44.445604  735220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 12:01:44.458200  735220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:01:44.474206  735220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 12:01:44.488511  735220 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:01:44.492336  735220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:44.502464  735220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:44.622867  735220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:01:44.645673  735220 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362 for IP: 192.168.85.2
	I1101 12:01:44.645723  735220 certs.go:195] generating shared ca certs ...
	I1101 12:01:44.645759  735220 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:44.645953  735220 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:01:44.646038  735220 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:01:44.646053  735220 certs.go:257] generating profile certs ...
	I1101 12:01:44.646123  735220 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key
	I1101 12:01:44.646153  735220 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt with IP's: []
	I1101 12:01:45.388147  735220 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt ...
	I1101 12:01:45.388188  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt: {Name:mk642b0ab96485622b2bee75a23b76b61d257946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.388415  735220 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key ...
	I1101 12:01:45.388431  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key: {Name:mk550e1e8af29909b97ebefd02a5fa48e5e0c1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.388545  735220 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429
	I1101 12:01:45.388573  735220 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 12:01:45.858681  735220 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429 ...
	I1101 12:01:45.858715  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429: {Name:mk81716ac1c411fbd0d47ad84bc89768c522a1b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.858914  735220 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429 ...
	I1101 12:01:45.858929  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429: {Name:mk06c21a0330e8cd8bbcd192b518491a4334ea84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.859023  735220 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt
	I1101 12:01:45.859109  735220 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key
	I1101 12:01:45.859176  735220 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key
	I1101 12:01:45.859196  735220 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt with IP's: []
	W1101 12:01:43.210346  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:45.684773  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:47.303637  735220 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt ...
	I1101 12:01:47.303670  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt: {Name:mk4036adf16a0b1ae63e1963b4e5899a85756d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:47.303861  735220 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key ...
	I1101 12:01:47.303875  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key: {Name:mk7a9e104a5639475ca72f8eda3d9f81c0e7deac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:47.304063  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:01:47.304119  735220 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:01:47.304135  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:01:47.304161  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:01:47.304195  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:01:47.304222  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:01:47.304269  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:47.304847  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:01:47.327264  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:01:47.345203  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:01:47.365577  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:01:47.385185  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 12:01:47.403499  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:01:47.421573  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:01:47.456024  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:01:47.485633  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:01:47.508856  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:01:47.529343  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:01:47.549297  735220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:01:47.562617  735220 ssh_runner.go:195] Run: openssl version
	I1101 12:01:47.569058  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:01:47.577829  735220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:47.581909  735220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:47.581983  735220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:47.623022  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:01:47.631743  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:01:47.640160  735220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:01:47.643909  735220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:01:47.643997  735220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:01:47.716157  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:01:47.724858  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:01:47.734866  735220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:01:47.739022  735220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:01:47.739113  735220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:01:47.780896  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:01:47.791456  735220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:01:47.795294  735220 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 12:01:47.795352  735220 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:01:47.795426  735220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:01:47.795486  735220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:01:47.822035  735220 cri.go:89] found id: ""
	I1101 12:01:47.822149  735220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:01:47.830159  735220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 12:01:47.837871  735220 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 12:01:47.838014  735220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 12:01:47.845947  735220 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 12:01:47.845969  735220 kubeadm.go:158] found existing configuration files:
	
	I1101 12:01:47.846023  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 12:01:47.854221  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 12:01:47.854310  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 12:01:47.862290  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 12:01:47.870557  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 12:01:47.870619  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 12:01:47.879291  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 12:01:47.888410  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 12:01:47.888528  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 12:01:47.896239  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 12:01:47.904227  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 12:01:47.904325  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 12:01:47.911967  735220 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 12:01:47.956806  735220 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 12:01:47.957090  735220 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 12:01:47.982208  735220 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 12:01:47.982306  735220 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 12:01:47.982366  735220 kubeadm.go:319] OS: Linux
	I1101 12:01:47.982434  735220 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 12:01:47.982504  735220 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 12:01:47.982569  735220 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 12:01:47.982640  735220 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 12:01:47.982710  735220 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 12:01:47.982781  735220 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 12:01:47.982847  735220 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 12:01:47.982932  735220 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 12:01:47.983000  735220 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 12:01:48.060470  735220 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 12:01:48.060655  735220 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 12:01:48.060805  735220 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 12:01:48.069825  735220 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 12:01:47.683479  731627 pod_ready.go:94] pod "coredns-66bc5c9577-4d2b7" is "Ready"
	I1101 12:01:47.683506  731627 pod_ready.go:86] duration metric: took 34.007054351s for pod "coredns-66bc5c9577-4d2b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.686686  731627 pod_ready.go:83] waiting for pod "etcd-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.697962  731627 pod_ready.go:94] pod "etcd-embed-certs-816860" is "Ready"
	I1101 12:01:47.697987  731627 pod_ready.go:86] duration metric: took 11.275616ms for pod "etcd-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.700886  731627 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.706341  731627 pod_ready.go:94] pod "kube-apiserver-embed-certs-816860" is "Ready"
	I1101 12:01:47.706367  731627 pod_ready.go:86] duration metric: took 5.458721ms for pod "kube-apiserver-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.710533  731627 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.880493  731627 pod_ready.go:94] pod "kube-controller-manager-embed-certs-816860" is "Ready"
	I1101 12:01:47.880516  731627 pod_ready.go:86] duration metric: took 169.956173ms for pod "kube-controller-manager-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:48.081680  731627 pod_ready.go:83] waiting for pod "kube-proxy-q5757" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:48.480868  731627 pod_ready.go:94] pod "kube-proxy-q5757" is "Ready"
	I1101 12:01:48.480896  731627 pod_ready.go:86] duration metric: took 399.162556ms for pod "kube-proxy-q5757" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:48.682505  731627 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:49.081028  731627 pod_ready.go:94] pod "kube-scheduler-embed-certs-816860" is "Ready"
	I1101 12:01:49.081054  731627 pod_ready.go:86] duration metric: took 398.457398ms for pod "kube-scheduler-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:49.081067  731627 pod_ready.go:40] duration metric: took 35.409149116s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:01:49.154789  731627 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:01:49.158034  731627 out.go:179] * Done! kubectl is now configured to use "embed-certs-816860" cluster and "default" namespace by default
	I1101 12:01:48.073472  735220 out.go:252]   - Generating certificates and keys ...
	I1101 12:01:48.073650  735220 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 12:01:48.073795  735220 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 12:01:48.238183  735220 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 12:01:48.444055  735220 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 12:01:49.218621  735220 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 12:01:49.595641  735220 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 12:01:51.576514  735220 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 12:01:51.576914  735220 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-772362 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 12:01:52.025748  735220 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 12:01:52.026203  735220 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-772362 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 12:01:52.342334  735220 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 12:01:52.943588  735220 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 12:01:53.838800  735220 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 12:01:53.839140  735220 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 12:01:54.204602  735220 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 12:01:54.538796  735220 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 12:01:55.425363  735220 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 12:01:55.809518  735220 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 12:01:56.183093  735220 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 12:01:56.184268  735220 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 12:01:56.188333  735220 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 12:01:56.191800  735220 out.go:252]   - Booting up control plane ...
	I1101 12:01:56.191916  735220 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 12:01:56.192009  735220 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 12:01:56.192691  735220 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 12:01:56.208918  735220 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 12:01:56.209322  735220 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 12:01:56.217503  735220 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 12:01:56.218065  735220 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 12:01:56.218137  735220 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 12:01:56.362389  735220 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 12:01:56.362522  735220 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 12:01:57.366112  735220 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000958048s
	I1101 12:01:57.367556  735220 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 12:01:57.367653  735220 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1101 12:01:57.367952  735220 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 12:01:57.368050  735220 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 12:02:00.797987  735220 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.429514668s
	
	
	==> CRI-O <==
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.847844342Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.851252017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.851423933Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.85150194Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.862166283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.862348398Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.862430598Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.866633795Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.866785756Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.866873659Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.875093843Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.875266399Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.950958974Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dced2a2d-7696-413e-aea5-19d37e72e7f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.956034551Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb959528-1b63-4d84-aa50-72e04848fda9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.958087312Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper" id=c37a2d7f-97a3-4a5a-9e81-2d16930d8e0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.95823019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.986104183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.986668293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.033918782Z" level=info msg="Created container 852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper" id=c37a2d7f-97a3-4a5a-9e81-2d16930d8e0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.042821371Z" level=info msg="Starting container: 852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df" id=ef660aeb-fe72-4bf9-a215-3bc7fed012d4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.04508659Z" level=info msg="Started container" PID=1725 containerID=852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper id=ef660aeb-fe72-4bf9-a215-3bc7fed012d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17100a87a436de05294e7b6ba338ce38baea0a20696465c4ca1cbbf342343fcb
	Nov 01 12:01:59 embed-certs-816860 conmon[1723]: conmon 852ec0ca430f1d6223ca <ninfo>: container 1725 exited with status 1
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.256778597Z" level=info msg="Removing container: 91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c" id=24885834-a7ee-4bc9-a735-8d80cf840209 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.278646867Z" level=info msg="Error loading conmon cgroup of container 91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c: cgroup deleted" id=24885834-a7ee-4bc9-a735-8d80cf840209 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.294868771Z" level=info msg="Removed container 91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper" id=24885834-a7ee-4bc9-a735-8d80cf840209 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	852ec0ca430f1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   17100a87a436d       dashboard-metrics-scraper-6ffb444bf9-fvftn   kubernetes-dashboard
	fcad0650c375b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   e148c3544e022       storage-provisioner                          kube-system
	a75ddb667c05f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   8ea3cbd6e69be       kubernetes-dashboard-855c9754f9-2zdqk        kubernetes-dashboard
	1552d26e4133a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   31c5bc3bdff81       coredns-66bc5c9577-4d2b7                     kube-system
	74b1d656e0a82       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   43760d800f4e0       busybox                                      default
	655b7eefdde39       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   e8bd16d798e42       kindnet-zmkct                                kube-system
	995b2bf90a8a8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   e148c3544e022       storage-provisioner                          kube-system
	8033962726d30       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   edb7e3f080b04       kube-proxy-q5757                             kube-system
	4416bc807f95f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   9709043d75717       kube-controller-manager-embed-certs-816860   kube-system
	39845a318c12b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   a8db9626dcb30       etcd-embed-certs-816860                      kube-system
	a5482a73b2097       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   989427b7db5d6       kube-apiserver-embed-certs-816860            kube-system
	4db70ce1adcd4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   2a410102b4fb2       kube-scheduler-embed-certs-816860            kube-system
	
	
	==> coredns [1552d26e4133a90fe2b6dfd704c6114d0b192f06060732c329a84f4146dd2526] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60863 - 61058 "HINFO IN 7615577079431996598.7143154987329697270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013113375s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-816860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-816860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=embed-certs-816860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_59_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-816860
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:01:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 12:00:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-816860
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                2c9a5c97-2e6e-4e74-beca-17c7b3951a1d
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-4d2b7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-816860                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-zmkct                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-816860             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-816860    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-q5757                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-816860             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fvftn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2zdqk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-816860 event: Registered Node embed-certs-816860 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-816860 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)      kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)      kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)      kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-816860 event: Registered Node embed-certs-816860 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee] <==
	{"level":"warn","ts":"2025-11-01T12:01:10.444924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.467090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.486419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.499725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.517539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.542982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.563170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.576363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.595851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.613457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.644202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.646173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.660536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.676161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.690720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.707565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.721998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.736714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.753819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.787623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.809149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.836238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.856529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.865373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.915282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46544","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:02:05 up  3:44,  0 user,  load average: 3.87, 3.69, 2.94
	Linux embed-certs-816860 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [655b7eefdde39dfe0355c9dcc040eb01ff76cb1f69dfd0ba6016dcf06530398d] <==
	I1101 12:01:12.649653       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:01:12.718213       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 12:01:12.718376       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:01:12.718418       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:01:12.718498       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:01:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:01:12.836791       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:01:12.836808       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:01:12.836816       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:01:12.837494       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:01:42.836843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:01:42.837086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:01:42.838325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 12:01:42.838347       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 12:01:44.237290       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:01:44.237433       1 metrics.go:72] Registering metrics
	I1101 12:01:44.237541       1 controller.go:711] "Syncing nftables rules"
	I1101 12:01:52.837778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 12:01:52.837887       1 main.go:301] handling current node
	I1101 12:02:02.837773       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 12:02:02.837847       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c] <==
	I1101 12:01:11.900817       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:01:11.900881       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 12:01:11.901107       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:01:11.964593       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 12:01:11.964802       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:01:11.975396       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 12:01:11.975449       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:01:11.975545       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 12:01:11.975638       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:01:11.975659       1 aggregator.go:171] initial CRD sync complete...
	I1101 12:01:11.975665       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 12:01:11.975670       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 12:01:11.975675       1 cache.go:39] Caches are synced for autoregister controller
	I1101 12:01:12.041940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1101 12:01:12.111293       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:01:12.524433       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:01:13.088032       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:01:13.216549       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:01:13.259954       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:01:13.275491       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:01:13.419399       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.180.18"}
	I1101 12:01:13.458219       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.232.164"}
	I1101 12:01:15.516734       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:01:15.617945       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 12:01:15.723280       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f] <==
	I1101 12:01:15.159418       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 12:01:15.159954       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 12:01:15.161419       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 12:01:15.164634       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 12:01:15.169994       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:01:15.174298       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 12:01:15.177978       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:01:15.182484       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 12:01:15.184900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 12:01:15.192203       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:01:15.202718       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:01:15.209307       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 12:01:15.209418       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 12:01:15.209463       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 12:01:15.209505       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:01:15.209557       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:01:15.209612       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:01:15.209752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:01:15.209819       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:01:15.209964       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 12:01:15.210086       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:01:15.211621       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-816860"
	I1101 12:01:15.212015       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 12:01:15.213015       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 12:01:15.232160       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [8033962726d30cb6bc62c8ed294a3ef636f01bb6c7ea4c31fb32722c0160af44] <==
	I1101 12:01:13.118353       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:01:13.274281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:01:13.377435       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:01:13.377478       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 12:01:13.377561       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:01:13.478643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:01:13.478757       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:01:13.498433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:01:13.498848       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:01:13.499058       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:01:13.500644       1 config.go:200] "Starting service config controller"
	I1101 12:01:13.500764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:01:13.500821       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:01:13.500977       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:01:13.501031       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:01:13.501061       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:01:13.501819       1 config.go:309] "Starting node config controller"
	I1101 12:01:13.529826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:01:13.529936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:01:13.601496       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:01:13.601507       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:01:13.601538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6] <==
	I1101 12:01:07.715589       1 serving.go:386] Generated self-signed cert in-memory
	W1101 12:01:11.690411       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 12:01:11.690450       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 12:01:11.690460       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 12:01:11.690467       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 12:01:11.859319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:01:11.864077       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:01:11.918272       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:01:11.918323       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:01:11.919200       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:01:11.919278       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:01:12.118625       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:01:16 embed-certs-816860 kubelet[777]: W1101 12:01:16.108794     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/crio-8ea3cbd6e69befb0f5a731ce4535f92062b336b52dc9cb99d256b022bb6bca27 WatchSource:0}: Error finding container 8ea3cbd6e69befb0f5a731ce4535f92062b336b52dc9cb99d256b022bb6bca27: Status 404 returned error can't find the container with id 8ea3cbd6e69befb0f5a731ce4535f92062b336b52dc9cb99d256b022bb6bca27
	Nov 01 12:01:17 embed-certs-816860 kubelet[777]: I1101 12:01:17.424241     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 12:01:21 embed-certs-816860 kubelet[777]: I1101 12:01:21.081296     777 scope.go:117] "RemoveContainer" containerID="b8dd777bb4b26762a30ed2b09341374ae1c547663b696928518203baffa1920a"
	Nov 01 12:01:22 embed-certs-816860 kubelet[777]: I1101 12:01:22.090914     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:22 embed-certs-816860 kubelet[777]: E1101 12:01:22.091120     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:22 embed-certs-816860 kubelet[777]: I1101 12:01:22.091312     777 scope.go:117] "RemoveContainer" containerID="b8dd777bb4b26762a30ed2b09341374ae1c547663b696928518203baffa1920a"
	Nov 01 12:01:23 embed-certs-816860 kubelet[777]: I1101 12:01:23.100034     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:23 embed-certs-816860 kubelet[777]: E1101 12:01:23.100188     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:26 embed-certs-816860 kubelet[777]: I1101 12:01:26.046087     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:26 embed-certs-816860 kubelet[777]: E1101 12:01:26.046255     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:36 embed-certs-816860 kubelet[777]: I1101 12:01:36.949378     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: I1101 12:01:37.143128     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: I1101 12:01:37.143522     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: E1101 12:01:37.143818     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: I1101 12:01:37.171000     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zdqk" podStartSLOduration=10.967065423 podStartE2EDuration="22.170979873s" podCreationTimestamp="2025-11-01 12:01:15 +0000 UTC" firstStartedPulling="2025-11-01 12:01:16.112407125 +0000 UTC m=+11.394158816" lastFinishedPulling="2025-11-01 12:01:27.316321567 +0000 UTC m=+22.598073266" observedRunningTime="2025-11-01 12:01:28.129275953 +0000 UTC m=+23.411027644" watchObservedRunningTime="2025-11-01 12:01:37.170979873 +0000 UTC m=+32.452731564"
	Nov 01 12:01:43 embed-certs-816860 kubelet[777]: I1101 12:01:43.166444     777 scope.go:117] "RemoveContainer" containerID="995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e"
	Nov 01 12:01:46 embed-certs-816860 kubelet[777]: I1101 12:01:46.044902     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:46 embed-certs-816860 kubelet[777]: E1101 12:01:46.045778     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:58 embed-certs-816860 kubelet[777]: I1101 12:01:58.949957     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:59 embed-certs-816860 kubelet[777]: I1101 12:01:59.247649     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:59 embed-certs-816860 kubelet[777]: I1101 12:01:59.247946     777 scope.go:117] "RemoveContainer" containerID="852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df"
	Nov 01 12:01:59 embed-certs-816860 kubelet[777]: E1101 12:01:59.248106     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:02:01 embed-certs-816860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:02:01 embed-certs-816860 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:02:01 embed-certs-816860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a75ddb667c05fcab243c095a16373bc468a7c774034e2506e30ef093ccc9ca4d] <==
	2025/11/01 12:01:27 Using namespace: kubernetes-dashboard
	2025/11/01 12:01:27 Using in-cluster config to connect to apiserver
	2025/11/01 12:01:27 Using secret token for csrf signing
	2025/11/01 12:01:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 12:01:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 12:01:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 12:01:27 Generating JWE encryption key
	2025/11/01 12:01:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 12:01:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 12:01:28 Initializing JWE encryption key from synchronized object
	2025/11/01 12:01:28 Creating in-cluster Sidecar client
	2025/11/01 12:01:28 Serving insecurely on HTTP port: 9090
	2025/11/01 12:01:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:01:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:01:27 Starting overwatch
	
	
	==> storage-provisioner [995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e] <==
	I1101 12:01:12.666272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 12:01:42.700419       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fcad0650c375b2e98da25fb4e730f8abde8304d6e156ade08d80be325c528a3f] <==
	I1101 12:01:43.256264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:01:43.282763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:01:43.282887       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:01:43.286946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:46.743152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:51.005746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:54.604834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:57.658610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:00.681251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:00.691164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:00.691421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:02:00.695684       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-816860_006e900b-f871-4ecb-a2e6-6eb004ee17d7!
	W1101 12:02:00.703081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:00.707350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51749ed9-875c-4abb-abce-2d05599a8ef5", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-816860_006e900b-f871-4ecb-a2e6-6eb004ee17d7 became leader
	W1101 12:02:00.725970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:00.797195       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-816860_006e900b-f871-4ecb-a2e6-6eb004ee17d7!
	W1101 12:02:02.736363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:02.745315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:04.749304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:04.757137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-816860 -n embed-certs-816860
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-816860 -n embed-certs-816860: exit status 2 (421.830887ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-816860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-816860
helpers_test.go:243: (dbg) docker inspect embed-certs-816860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a",
	        "Created": "2025-11-01T11:59:10.098758518Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731754,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:00:57.889731598Z",
	            "FinishedAt": "2025-11-01T12:00:56.844221678Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/hosts",
	        "LogPath": "/var/lib/docker/containers/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a-json.log",
	        "Name": "/embed-certs-816860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-816860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-816860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a",
	                "LowerDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02018156408dc07733832e3f64711b2874aac010bd9bf1630de1219604c37afa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-816860",
	                "Source": "/var/lib/docker/volumes/embed-certs-816860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-816860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-816860",
	                "name.minikube.sigs.k8s.io": "embed-certs-816860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efd0585f14a77fec021b31a006cd5b3c2a68639411858f92819fd508dff165fc",
	            "SandboxKey": "/var/run/docker/netns/efd0585f14a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-816860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:2d:6d:83:f7:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c593e124071dc106f0bb655a4bbd20938473ea59778c717ee430f5236bedf71",
	                    "EndpointID": "02b1049b76ce728577410354fe88d9d15f9927d1d9a1ec0e493c954bc3c4afe7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-816860",
	                        "5efd8111d020"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860: exit status 2 (475.98869ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-816860 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-816860 logs -n 25: (1.623511857s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:57 UTC │ 01 Nov 25 11:58 UTC │
	│ image   │ old-k8s-version-952358 image list --format=json                                                                                                                                                                                               │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ pause   │ -p old-k8s-version-952358 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │                     │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:01:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:01:31.667237  735220 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:01:31.667477  735220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:01:31.667508  735220 out.go:374] Setting ErrFile to fd 2...
	I1101 12:01:31.667547  735220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:01:31.667937  735220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:01:31.668408  735220 out.go:368] Setting JSON to false
	I1101 12:01:31.669567  735220 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13441,"bootTime":1761985051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:01:31.669664  735220 start.go:143] virtualization:  
	I1101 12:01:31.675844  735220 out.go:179] * [default-k8s-diff-port-772362] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:01:31.679716  735220 notify.go:221] Checking for updates...
	I1101 12:01:31.680599  735220 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:01:31.684571  735220 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:01:31.688279  735220 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:01:31.691357  735220 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:01:31.694476  735220 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:01:31.697675  735220 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:01:31.701347  735220 config.go:182] Loaded profile config "embed-certs-816860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:31.701540  735220 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:01:31.741764  735220 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:01:31.741905  735220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:01:31.809938  735220 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:01:31.800208318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:01:31.810063  735220 docker.go:319] overlay module found
	I1101 12:01:31.813395  735220 out.go:179] * Using the docker driver based on user configuration
	I1101 12:01:31.816303  735220 start.go:309] selected driver: docker
	I1101 12:01:31.816328  735220 start.go:930] validating driver "docker" against <nil>
	I1101 12:01:31.816345  735220 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:01:31.817118  735220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:01:31.875149  735220 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:01:31.866024807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:01:31.875304  735220 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 12:01:31.875535  735220 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:01:31.878530  735220 out.go:179] * Using Docker driver with root privileges
	I1101 12:01:31.881474  735220 cni.go:84] Creating CNI manager for ""
	I1101 12:01:31.881539  735220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:01:31.881552  735220 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 12:01:31.881628  735220 start.go:353] cluster config:
	{Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:01:31.884798  735220 out.go:179] * Starting "default-k8s-diff-port-772362" primary control-plane node in "default-k8s-diff-port-772362" cluster
	I1101 12:01:31.887678  735220 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:01:31.890582  735220 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:01:31.893386  735220 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:31.893448  735220 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:01:31.893461  735220 cache.go:59] Caching tarball of preloaded images
	I1101 12:01:31.893482  735220 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:01:31.893549  735220 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:01:31.893559  735220 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:01:31.893675  735220 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json ...
	I1101 12:01:31.893713  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json: {Name:mkb3b73b8c3e9b3e5943db629e7f5837a3594cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:31.913598  735220 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:01:31.913625  735220 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:01:31.913638  735220 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:01:31.913660  735220 start.go:360] acquireMachinesLock for default-k8s-diff-port-772362: {Name:mk4216e21d2fa88f97e4740f5b50e6f442617f00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:01:31.913797  735220 start.go:364] duration metric: took 116.325µs to acquireMachinesLock for "default-k8s-diff-port-772362"
	I1101 12:01:31.913829  735220 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:01:31.913909  735220 start.go:125] createHost starting for "" (driver="docker")
	W1101 12:01:29.682398  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:31.684361  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:31.917371  735220 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 12:01:31.917625  735220 start.go:159] libmachine.API.Create for "default-k8s-diff-port-772362" (driver="docker")
	I1101 12:01:31.917668  735220 client.go:173] LocalClient.Create starting
	I1101 12:01:31.917768  735220 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 12:01:31.917804  735220 main.go:143] libmachine: Decoding PEM data...
	I1101 12:01:31.917824  735220 main.go:143] libmachine: Parsing certificate...
	I1101 12:01:31.917891  735220 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 12:01:31.917920  735220 main.go:143] libmachine: Decoding PEM data...
	I1101 12:01:31.917930  735220 main.go:143] libmachine: Parsing certificate...
	I1101 12:01:31.918295  735220 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 12:01:31.934308  735220 cli_runner.go:211] docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 12:01:31.934414  735220 network_create.go:284] running [docker network inspect default-k8s-diff-port-772362] to gather additional debugging logs...
	I1101 12:01:31.934438  735220 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362
	W1101 12:01:31.949958  735220 cli_runner.go:211] docker network inspect default-k8s-diff-port-772362 returned with exit code 1
	I1101 12:01:31.949995  735220 network_create.go:287] error running [docker network inspect default-k8s-diff-port-772362]: docker network inspect default-k8s-diff-port-772362: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-772362 not found
	I1101 12:01:31.950009  735220 network_create.go:289] output of [docker network inspect default-k8s-diff-port-772362]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-772362 not found
	
	** /stderr **
	I1101 12:01:31.950170  735220 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:01:31.966524  735220 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 12:01:31.966889  735220 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 12:01:31.967241  735220 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 12:01:31.967544  735220 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c593e124071 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:f6:22:f3:50:47} reservation:<nil>}
	I1101 12:01:31.967973  735220 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fd110}
	I1101 12:01:31.967996  735220 network_create.go:124] attempt to create docker network default-k8s-diff-port-772362 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 12:01:31.968057  735220 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 default-k8s-diff-port-772362
	I1101 12:01:32.035002  735220 network_create.go:108] docker network default-k8s-diff-port-772362 192.168.85.0/24 created
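
The network.go lines above show how the free subnet was chosen: candidate /24 blocks starting at 192.168.49.0 are tried with the third octet stepping by 9 (49, 58, 67, 76, 85, ...), and each block already backed by a host bridge (br-*) is skipped, which is why this cluster lands on 192.168.85.0/24. Below is a minimal Go sketch of that scan, assuming the step-of-9 pattern observed in the log and using host interface addresses as the "taken" check; the helper names are hypothetical, not minikube's actual API.

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside cidr,
// i.e. the subnet is already claimed by an existing bridge network.
func taken(cidr *net.IPNet, addrs []net.Addr) bool {
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

// freeSubnet walks candidate private /24 blocks (third octet stepping by 9,
// as seen in the log) and returns the first one not in use on the host.
func freeSubnet() (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for third := 49; third < 255; third += 9 { // 49, 58, 67, 76, 85, ...
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return "", err
		}
		if !taken(ipnet, addrs) {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	s, err := freeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using", s) // e.g. 192.168.85.0/24 as in the log above
}
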
	I1101 12:01:32.035041  735220 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-772362" container
	I1101 12:01:32.035140  735220 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 12:01:32.052396  735220 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-772362 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --label created_by.minikube.sigs.k8s.io=true
	I1101 12:01:32.072530  735220 oci.go:103] Successfully created a docker volume default-k8s-diff-port-772362
	I1101 12:01:32.072622  735220 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-772362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --entrypoint /usr/bin/test -v default-k8s-diff-port-772362:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 12:01:32.659087  735220 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-772362
	I1101 12:01:32.659130  735220 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:32.659149  735220 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 12:01:32.659217  735220 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 12:01:33.684393  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:36.186294  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:37.127505  735220 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-772362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.468242129s)
	I1101 12:01:37.127540  735220 kic.go:203] duration metric: took 4.468386762s to extract preloaded images to volume ...
	W1101 12:01:37.127681  735220 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 12:01:37.127784  735220 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 12:01:37.232415  735220 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-772362 --name default-k8s-diff-port-772362 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-772362 --network default-k8s-diff-port-772362 --ip 192.168.85.2 --volume default-k8s-diff-port-772362:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 12:01:37.591428  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Running}}
	I1101 12:01:37.611801  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:01:37.637921  735220 cli_runner.go:164] Run: docker exec default-k8s-diff-port-772362 stat /var/lib/dpkg/alternatives/iptables
	I1101 12:01:37.695118  735220 oci.go:144] the created container "default-k8s-diff-port-772362" has a running status.
	I1101 12:01:37.695149  735220 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa...
	I1101 12:01:38.007573  735220 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 12:01:38.038598  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:01:38.068550  735220 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 12:01:38.068579  735220 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-772362 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 12:01:38.128831  735220 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:01:38.151758  735220 machine.go:94] provisionDockerMachine start ...
	I1101 12:01:38.151869  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:38.180355  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:38.180684  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:38.180701  735220 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:01:38.181263  735220 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35606->127.0.0.1:33805: read: connection reset by peer
	I1101 12:01:41.329407  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
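
The handshake failure at 12:01:38 followed by the clean "SSH cmd err, output: <nil>" at 12:01:41 indicates the provisioner keeps retrying the forwarded SSH port (127.0.0.1:33805) until sshd inside the freshly started container accepts connections. A rough sketch of such a retry loop, assuming golang.org/x/crypto/ssh and the key path shown earlier in the log; the function name and timings are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry dials the forwarded SSH port until it answers or the
// deadline passes; early attempts may fail while sshd is still starting.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready before deadline: %w", err)
		}
		time.Sleep(time.Second) // sshd may still be coming up inside the container
	}
}

func main() {
	c, err := dialWithRetry("127.0.0.1:33805", "docker",
		"/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa",
		30*time.Second)
	if err != nil {
		panic(err)
	}
	defer c.Close()
}
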
	
	I1101 12:01:41.329434  735220 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772362"
	I1101 12:01:41.329500  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.346380  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:41.346704  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:41.346721  735220 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772362 && echo "default-k8s-diff-port-772362" | sudo tee /etc/hostname
	I1101 12:01:41.508154  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:01:41.508237  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.527482  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:41.527789  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:41.527813  735220 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772362/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772362' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1101 12:01:38.195069  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:40.682598  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:41.683278  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:01:41.683304  735220 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:01:41.683332  735220 ubuntu.go:190] setting up certificates
	I1101 12:01:41.683341  735220 provision.go:84] configureAuth start
	I1101 12:01:41.683399  735220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:01:41.705326  735220 provision.go:143] copyHostCerts
	I1101 12:01:41.705396  735220 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:01:41.705409  735220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:01:41.705489  735220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:01:41.705621  735220 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:01:41.705631  735220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:01:41.705661  735220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:01:41.705917  735220 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:01:41.705929  735220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:01:41.705967  735220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:01:41.706065  735220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772362 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-772362 localhost minikube]
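
The provision.go line above generates the machine's server certificate with SANs covering 127.0.0.1, the container IP 192.168.85.2, the hostname, and the generic localhost/minikube names. Below is a compact, hypothetical sketch of building such a certificate with Go's crypto/x509; it self-signs for brevity, whereas the real step signs with the profile CA (ca.pem / ca-key.pem) referenced in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-772362"},
		// SANs as reported in the provision.go log line above.
		DNSNames:    []string{"default-k8s-diff-port-772362", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; the real step uses the profile CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
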
	I1101 12:01:41.797959  735220 provision.go:177] copyRemoteCerts
	I1101 12:01:41.798034  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:01:41.798083  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.815218  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:41.921751  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:01:41.940587  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 12:01:41.958650  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 12:01:41.982515  735220 provision.go:87] duration metric: took 299.14775ms to configureAuth
	I1101 12:01:41.982548  735220 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:01:41.982742  735220 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:01:41.982858  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:41.999563  735220 main.go:143] libmachine: Using SSH client type: native
	I1101 12:01:41.999873  735220 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I1101 12:01:41.999897  735220 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:01:42.403460  735220 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:01:42.403485  735220 machine.go:97] duration metric: took 4.251702472s to provisionDockerMachine
	I1101 12:01:42.403496  735220 client.go:176] duration metric: took 10.485818595s to LocalClient.Create
	I1101 12:01:42.403507  735220 start.go:167] duration metric: took 10.485883638s to libmachine.API.Create "default-k8s-diff-port-772362"
	I1101 12:01:42.403515  735220 start.go:293] postStartSetup for "default-k8s-diff-port-772362" (driver="docker")
	I1101 12:01:42.403525  735220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:01:42.403590  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:01:42.403639  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.424387  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.532914  735220 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:01:42.536437  735220 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:01:42.536469  735220 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:01:42.536480  735220 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:01:42.536540  735220 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:01:42.536629  735220 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:01:42.536736  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:01:42.546824  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:42.566588  735220 start.go:296] duration metric: took 163.057746ms for postStartSetup
	I1101 12:01:42.567026  735220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:01:42.585147  735220 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json ...
	I1101 12:01:42.585431  735220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:01:42.585485  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.602977  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.711279  735220 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:01:42.716253  735220 start.go:128] duration metric: took 10.802328086s to createHost
	I1101 12:01:42.716356  735220 start.go:83] releasing machines lock for "default-k8s-diff-port-772362", held for 10.802542604s
	I1101 12:01:42.716434  735220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:01:42.735084  735220 ssh_runner.go:195] Run: cat /version.json
	I1101 12:01:42.735136  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.735137  735220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:01:42.735213  735220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:01:42.761855  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.762078  735220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:01:42.865271  735220 ssh_runner.go:195] Run: systemctl --version
	I1101 12:01:42.959362  735220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:01:42.996403  735220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:01:43.000788  735220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:01:43.000926  735220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:01:43.032455  735220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 12:01:43.032541  735220 start.go:496] detecting cgroup driver to use...
	I1101 12:01:43.032603  735220 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:01:43.032702  735220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:01:43.051624  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:01:43.065059  735220 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:01:43.065192  735220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:01:43.084356  735220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:01:43.106872  735220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:01:43.278157  735220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:01:43.418786  735220 docker.go:234] disabling docker service ...
	I1101 12:01:43.418898  735220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:01:43.443724  735220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:01:43.458319  735220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:01:43.592984  735220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:01:43.725035  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:01:43.740254  735220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:01:43.755969  735220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:01:43.756034  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.765815  735220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:01:43.765886  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.775751  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.785560  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.795812  735220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:01:43.804593  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.814147  735220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.830210  735220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:01:43.840094  735220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:01:43.848407  735220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:01:43.856620  735220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:43.998798  735220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 12:01:44.148278  735220 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:01:44.148373  735220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:01:44.152344  735220 start.go:564] Will wait 60s for crictl version
	I1101 12:01:44.152448  735220 ssh_runner.go:195] Run: which crictl
	I1101 12:01:44.156433  735220 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:01:44.198024  735220 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:01:44.198130  735220 ssh_runner.go:195] Run: crio --version
	I1101 12:01:44.225854  735220 ssh_runner.go:195] Run: crio --version
	I1101 12:01:44.259781  735220 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:01:44.262637  735220 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:01:44.281682  735220 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 12:01:44.286283  735220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:44.296672  735220 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:01:44.296784  735220 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:01:44.296857  735220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:44.336734  735220 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:44.336759  735220 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:01:44.336817  735220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:01:44.362593  735220 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:01:44.362618  735220 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:01:44.362626  735220 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 12:01:44.362800  735220 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:01:44.362907  735220 ssh_runner.go:195] Run: crio config
	I1101 12:01:44.428869  735220 cni.go:84] Creating CNI manager for ""
	I1101 12:01:44.428894  735220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:01:44.428948  735220 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:01:44.428981  735220 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772362 NodeName:default-k8s-diff-port-772362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:01:44.429163  735220 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:01:44.429272  735220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:01:44.437473  735220 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:01:44.437600  735220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:01:44.445604  735220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 12:01:44.458200  735220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:01:44.474206  735220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 12:01:44.488511  735220 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:01:44.492336  735220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:01:44.502464  735220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:01:44.622867  735220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:01:44.645673  735220 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362 for IP: 192.168.85.2
	I1101 12:01:44.645723  735220 certs.go:195] generating shared ca certs ...
	I1101 12:01:44.645759  735220 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:44.645953  735220 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:01:44.646038  735220 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:01:44.646053  735220 certs.go:257] generating profile certs ...
	I1101 12:01:44.646123  735220 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key
	I1101 12:01:44.646153  735220 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt with IP's: []
	I1101 12:01:45.388147  735220 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt ...
	I1101 12:01:45.388188  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt: {Name:mk642b0ab96485622b2bee75a23b76b61d257946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.388415  735220 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key ...
	I1101 12:01:45.388431  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key: {Name:mk550e1e8af29909b97ebefd02a5fa48e5e0c1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.388545  735220 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429
	I1101 12:01:45.388573  735220 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 12:01:45.858681  735220 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429 ...
	I1101 12:01:45.858715  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429: {Name:mk81716ac1c411fbd0d47ad84bc89768c522a1b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.858914  735220 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429 ...
	I1101 12:01:45.858929  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429: {Name:mk06c21a0330e8cd8bbcd192b518491a4334ea84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:45.859023  735220 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt.c6086429 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt
	I1101 12:01:45.859109  735220 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key
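
Note the IP list used for the apiserver certificate a few lines above: 10.96.0.1 is not a node address but the first usable address of the ServiceCIDR 10.96.0.0/12 from the cluster config, which Kubernetes reserves for the in-cluster "kubernetes" Service, so the apiserver certificate must cover it. A small sketch of that derivation; the helper name is illustrative.

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the network address of the service CIDR plus one,
// i.e. the ClusterIP assigned to the default "kubernetes" Service.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1, matching the SAN in the log above
}
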
	I1101 12:01:45.859176  735220 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key
	I1101 12:01:45.859196  735220 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt with IP's: []
	W1101 12:01:43.210346  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	W1101 12:01:45.684773  731627 pod_ready.go:104] pod "coredns-66bc5c9577-4d2b7" is not "Ready", error: <nil>
	I1101 12:01:47.303637  735220 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt ...
	I1101 12:01:47.303670  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt: {Name:mk4036adf16a0b1ae63e1963b4e5899a85756d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:47.303861  735220 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key ...
	I1101 12:01:47.303875  735220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key: {Name:mk7a9e104a5639475ca72f8eda3d9f81c0e7deac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:01:47.304063  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:01:47.304119  735220 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:01:47.304135  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:01:47.304161  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:01:47.304195  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:01:47.304222  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:01:47.304269  735220 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:01:47.304847  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:01:47.327264  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:01:47.345203  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:01:47.365577  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:01:47.385185  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 12:01:47.403499  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:01:47.421573  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:01:47.456024  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:01:47.485633  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:01:47.508856  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:01:47.529343  735220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:01:47.549297  735220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:01:47.562617  735220 ssh_runner.go:195] Run: openssl version
	I1101 12:01:47.569058  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:01:47.577829  735220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:47.581909  735220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:47.581983  735220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:01:47.623022  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:01:47.631743  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:01:47.640160  735220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:01:47.643909  735220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:01:47.643997  735220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:01:47.716157  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:01:47.724858  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:01:47.734866  735220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:01:47.739022  735220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:01:47.739113  735220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:01:47.780896  735220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:01:47.791456  735220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:01:47.795294  735220 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 12:01:47.795352  735220 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:01:47.795426  735220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:01:47.795486  735220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:01:47.822035  735220 cri.go:89] found id: ""
	I1101 12:01:47.822149  735220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:01:47.830159  735220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 12:01:47.837871  735220 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 12:01:47.838014  735220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 12:01:47.845947  735220 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 12:01:47.845969  735220 kubeadm.go:158] found existing configuration files:
	
	I1101 12:01:47.846023  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 12:01:47.854221  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 12:01:47.854310  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 12:01:47.862290  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 12:01:47.870557  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 12:01:47.870619  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 12:01:47.879291  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 12:01:47.888410  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 12:01:47.888528  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 12:01:47.896239  735220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 12:01:47.904227  735220 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 12:01:47.904325  735220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 12:01:47.911967  735220 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 12:01:47.956806  735220 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 12:01:47.957090  735220 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 12:01:47.982208  735220 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 12:01:47.982306  735220 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 12:01:47.982366  735220 kubeadm.go:319] OS: Linux
	I1101 12:01:47.982434  735220 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 12:01:47.982504  735220 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 12:01:47.982569  735220 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 12:01:47.982640  735220 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 12:01:47.982710  735220 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 12:01:47.982781  735220 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 12:01:47.982847  735220 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 12:01:47.982932  735220 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 12:01:47.983000  735220 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 12:01:48.060470  735220 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 12:01:48.060655  735220 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 12:01:48.060805  735220 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 12:01:48.069825  735220 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 12:01:47.683479  731627 pod_ready.go:94] pod "coredns-66bc5c9577-4d2b7" is "Ready"
	I1101 12:01:47.683506  731627 pod_ready.go:86] duration metric: took 34.007054351s for pod "coredns-66bc5c9577-4d2b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.686686  731627 pod_ready.go:83] waiting for pod "etcd-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.697962  731627 pod_ready.go:94] pod "etcd-embed-certs-816860" is "Ready"
	I1101 12:01:47.697987  731627 pod_ready.go:86] duration metric: took 11.275616ms for pod "etcd-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.700886  731627 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.706341  731627 pod_ready.go:94] pod "kube-apiserver-embed-certs-816860" is "Ready"
	I1101 12:01:47.706367  731627 pod_ready.go:86] duration metric: took 5.458721ms for pod "kube-apiserver-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.710533  731627 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:47.880493  731627 pod_ready.go:94] pod "kube-controller-manager-embed-certs-816860" is "Ready"
	I1101 12:01:47.880516  731627 pod_ready.go:86] duration metric: took 169.956173ms for pod "kube-controller-manager-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:48.081680  731627 pod_ready.go:83] waiting for pod "kube-proxy-q5757" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:48.480868  731627 pod_ready.go:94] pod "kube-proxy-q5757" is "Ready"
	I1101 12:01:48.480896  731627 pod_ready.go:86] duration metric: took 399.162556ms for pod "kube-proxy-q5757" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:48.682505  731627 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:49.081028  731627 pod_ready.go:94] pod "kube-scheduler-embed-certs-816860" is "Ready"
	I1101 12:01:49.081054  731627 pod_ready.go:86] duration metric: took 398.457398ms for pod "kube-scheduler-embed-certs-816860" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:01:49.081067  731627 pod_ready.go:40] duration metric: took 35.409149116s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:01:49.154789  731627 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:01:49.158034  731627 out.go:179] * Done! kubectl is now configured to use "embed-certs-816860" cluster and "default" namespace by default
	I1101 12:01:48.073472  735220 out.go:252]   - Generating certificates and keys ...
	I1101 12:01:48.073650  735220 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 12:01:48.073795  735220 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 12:01:48.238183  735220 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 12:01:48.444055  735220 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 12:01:49.218621  735220 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 12:01:49.595641  735220 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 12:01:51.576514  735220 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 12:01:51.576914  735220 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-772362 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 12:01:52.025748  735220 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 12:01:52.026203  735220 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-772362 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 12:01:52.342334  735220 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 12:01:52.943588  735220 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 12:01:53.838800  735220 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 12:01:53.839140  735220 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 12:01:54.204602  735220 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 12:01:54.538796  735220 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 12:01:55.425363  735220 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 12:01:55.809518  735220 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 12:01:56.183093  735220 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 12:01:56.184268  735220 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 12:01:56.188333  735220 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 12:01:56.191800  735220 out.go:252]   - Booting up control plane ...
	I1101 12:01:56.191916  735220 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 12:01:56.192009  735220 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 12:01:56.192691  735220 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 12:01:56.208918  735220 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 12:01:56.209322  735220 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 12:01:56.217503  735220 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 12:01:56.218065  735220 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 12:01:56.218137  735220 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 12:01:56.362389  735220 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 12:01:56.362522  735220 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 12:01:57.366112  735220 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000958048s
	I1101 12:01:57.367556  735220 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 12:01:57.367653  735220 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1101 12:01:57.367952  735220 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 12:01:57.368050  735220 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 12:02:00.797987  735220 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.429514668s
	I1101 12:02:03.221653  735220 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.854017672s
	I1101 12:02:05.371805  735220 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.003789694s
	I1101 12:02:05.399343  735220 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 12:02:05.424747  735220 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 12:02:05.449670  735220 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 12:02:05.450336  735220 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-772362 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 12:02:05.467854  735220 kubeadm.go:319] [bootstrap-token] Using token: otc3g1.kupgskezn51xabih
	I1101 12:02:05.472108  735220 out.go:252]   - Configuring RBAC rules ...
	I1101 12:02:05.472244  735220 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 12:02:05.477901  735220 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 12:02:05.491290  735220 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 12:02:05.501744  735220 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 12:02:05.507206  735220 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 12:02:05.511972  735220 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 12:02:05.785949  735220 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 12:02:06.282045  735220 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 12:02:06.781549  735220 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 12:02:06.786903  735220 kubeadm.go:319] 
	I1101 12:02:06.787002  735220 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 12:02:06.787009  735220 kubeadm.go:319] 
	I1101 12:02:06.787090  735220 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 12:02:06.787095  735220 kubeadm.go:319] 
	I1101 12:02:06.787121  735220 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 12:02:06.787183  735220 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 12:02:06.787236  735220 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 12:02:06.787241  735220 kubeadm.go:319] 
	I1101 12:02:06.787298  735220 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 12:02:06.787302  735220 kubeadm.go:319] 
	I1101 12:02:06.787352  735220 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 12:02:06.787356  735220 kubeadm.go:319] 
	I1101 12:02:06.787411  735220 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 12:02:06.787490  735220 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 12:02:06.787562  735220 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 12:02:06.787566  735220 kubeadm.go:319] 
	I1101 12:02:06.787655  735220 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 12:02:06.787736  735220 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 12:02:06.787740  735220 kubeadm.go:319] 
	I1101 12:02:06.787829  735220 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token otc3g1.kupgskezn51xabih \
	I1101 12:02:06.787938  735220 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 12:02:06.787960  735220 kubeadm.go:319] 	--control-plane 
	I1101 12:02:06.787964  735220 kubeadm.go:319] 
	I1101 12:02:06.788053  735220 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 12:02:06.788058  735220 kubeadm.go:319] 
	I1101 12:02:06.788144  735220 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token otc3g1.kupgskezn51xabih \
	I1101 12:02:06.788252  735220 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 12:02:06.790622  735220 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 12:02:06.790879  735220 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 12:02:06.790997  735220 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 12:02:06.791072  735220 cni.go:84] Creating CNI manager for ""
	I1101 12:02:06.791088  735220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:06.794223  735220 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.847844342Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.851252017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.851423933Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.85150194Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.862166283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.862348398Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.862430598Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.866633795Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.866785756Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.866873659Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.875093843Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:01:52 embed-certs-816860 crio[653]: time="2025-11-01T12:01:52.875266399Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.950958974Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dced2a2d-7696-413e-aea5-19d37e72e7f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.956034551Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb959528-1b63-4d84-aa50-72e04848fda9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.958087312Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper" id=c37a2d7f-97a3-4a5a-9e81-2d16930d8e0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.95823019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.986104183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:58 embed-certs-816860 crio[653]: time="2025-11-01T12:01:58.986668293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.033918782Z" level=info msg="Created container 852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper" id=c37a2d7f-97a3-4a5a-9e81-2d16930d8e0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.042821371Z" level=info msg="Starting container: 852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df" id=ef660aeb-fe72-4bf9-a215-3bc7fed012d4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.04508659Z" level=info msg="Started container" PID=1725 containerID=852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper id=ef660aeb-fe72-4bf9-a215-3bc7fed012d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17100a87a436de05294e7b6ba338ce38baea0a20696465c4ca1cbbf342343fcb
	Nov 01 12:01:59 embed-certs-816860 conmon[1723]: conmon 852ec0ca430f1d6223ca <ninfo>: container 1725 exited with status 1
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.256778597Z" level=info msg="Removing container: 91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c" id=24885834-a7ee-4bc9-a735-8d80cf840209 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.278646867Z" level=info msg="Error loading conmon cgroup of container 91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c: cgroup deleted" id=24885834-a7ee-4bc9-a735-8d80cf840209 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:01:59 embed-certs-816860 crio[653]: time="2025-11-01T12:01:59.294868771Z" level=info msg="Removed container 91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn/dashboard-metrics-scraper" id=24885834-a7ee-4bc9-a735-8d80cf840209 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	852ec0ca430f1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   17100a87a436d       dashboard-metrics-scraper-6ffb444bf9-fvftn   kubernetes-dashboard
	fcad0650c375b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   e148c3544e022       storage-provisioner                          kube-system
	a75ddb667c05f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   8ea3cbd6e69be       kubernetes-dashboard-855c9754f9-2zdqk        kubernetes-dashboard
	1552d26e4133a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   31c5bc3bdff81       coredns-66bc5c9577-4d2b7                     kube-system
	74b1d656e0a82       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   43760d800f4e0       busybox                                      default
	655b7eefdde39       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   e8bd16d798e42       kindnet-zmkct                                kube-system
	995b2bf90a8a8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   e148c3544e022       storage-provisioner                          kube-system
	8033962726d30       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   edb7e3f080b04       kube-proxy-q5757                             kube-system
	4416bc807f95f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9709043d75717       kube-controller-manager-embed-certs-816860   kube-system
	39845a318c12b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a8db9626dcb30       etcd-embed-certs-816860                      kube-system
	a5482a73b2097       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   989427b7db5d6       kube-apiserver-embed-certs-816860            kube-system
	4db70ce1adcd4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2a410102b4fb2       kube-scheduler-embed-certs-816860            kube-system
	
	
	==> coredns [1552d26e4133a90fe2b6dfd704c6114d0b192f06060732c329a84f4146dd2526] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60863 - 61058 "HINFO IN 7615577079431996598.7143154987329697270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013113375s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-816860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-816860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=embed-certs-816860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_59_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-816860
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:01:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 11:59:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:01:42 +0000   Sat, 01 Nov 2025 12:00:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-816860
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                2c9a5c97-2e6e-4e74-beca-17c7b3951a1d
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-4d2b7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-embed-certs-816860                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-zmkct                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-816860             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-embed-certs-816860    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-q5757                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-816860             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fvftn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2zdqk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node embed-certs-816860 event: Registered Node embed-certs-816860 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-816860 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)      kubelet          Node embed-certs-816860 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)      kubelet          Node embed-certs-816860 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)      kubelet          Node embed-certs-816860 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-816860 event: Registered Node embed-certs-816860 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [39845a318c12b6c98d99ddf6ea6186a7059c3166814d00af6cd36c5405b346ee] <==
	{"level":"warn","ts":"2025-11-01T12:01:10.444924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.467090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.486419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.499725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.517539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.542982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.563170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.576363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.595851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.613457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.644202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.646173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.660536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.676161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.690720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.707565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.721998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.736714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.753819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.787623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.809149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.836238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.856529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.865373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:01:10.915282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46544","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:02:07 up  3:44,  0 user,  load average: 4.04, 3.72, 2.96
	Linux embed-certs-816860 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [655b7eefdde39dfe0355c9dcc040eb01ff76cb1f69dfd0ba6016dcf06530398d] <==
	I1101 12:01:12.649653       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:01:12.718213       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 12:01:12.718376       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:01:12.718418       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:01:12.718498       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:01:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:01:12.836791       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:01:12.836808       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:01:12.836816       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:01:12.837494       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:01:42.836843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:01:42.837086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:01:42.838325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 12:01:42.838347       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 12:01:44.237290       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:01:44.237433       1 metrics.go:72] Registering metrics
	I1101 12:01:44.237541       1 controller.go:711] "Syncing nftables rules"
	I1101 12:01:52.837778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 12:01:52.837887       1 main.go:301] handling current node
	I1101 12:02:02.837773       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 12:02:02.837847       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5482a73b20973808dd11c20a8e8b069545e2025ad3b9520ef1f963f7620528c] <==
	I1101 12:01:11.900817       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:01:11.900881       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 12:01:11.901107       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:01:11.964593       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 12:01:11.964802       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:01:11.975396       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 12:01:11.975449       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:01:11.975545       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 12:01:11.975638       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:01:11.975659       1 aggregator.go:171] initial CRD sync complete...
	I1101 12:01:11.975665       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 12:01:11.975670       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 12:01:11.975675       1 cache.go:39] Caches are synced for autoregister controller
	I1101 12:01:12.041940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1101 12:01:12.111293       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:01:12.524433       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:01:13.088032       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:01:13.216549       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:01:13.259954       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:01:13.275491       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:01:13.419399       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.180.18"}
	I1101 12:01:13.458219       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.232.164"}
	I1101 12:01:15.516734       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:01:15.617945       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 12:01:15.723280       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4416bc807f95ffaf24502c304ef9bf5001bd9ddd88301f4d6ef400ff3ea5432f] <==
	I1101 12:01:15.159418       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 12:01:15.159954       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 12:01:15.161419       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 12:01:15.164634       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 12:01:15.169994       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:01:15.174298       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 12:01:15.177978       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:01:15.182484       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 12:01:15.184900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 12:01:15.192203       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:01:15.202718       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:01:15.209307       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 12:01:15.209418       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 12:01:15.209463       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 12:01:15.209505       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:01:15.209557       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:01:15.209612       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:01:15.209752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:01:15.209819       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:01:15.209964       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 12:01:15.210086       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:01:15.211621       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-816860"
	I1101 12:01:15.212015       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 12:01:15.213015       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 12:01:15.232160       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [8033962726d30cb6bc62c8ed294a3ef636f01bb6c7ea4c31fb32722c0160af44] <==
	I1101 12:01:13.118353       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:01:13.274281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:01:13.377435       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:01:13.377478       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 12:01:13.377561       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:01:13.478643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:01:13.478757       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:01:13.498433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:01:13.498848       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:01:13.499058       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:01:13.500644       1 config.go:200] "Starting service config controller"
	I1101 12:01:13.500764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:01:13.500821       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:01:13.500977       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:01:13.501031       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:01:13.501061       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:01:13.501819       1 config.go:309] "Starting node config controller"
	I1101 12:01:13.529826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:01:13.529936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:01:13.601496       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:01:13.601507       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:01:13.601538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4db70ce1adcd4501c22be41653a3f58f27a96d77e7f80060e3212521fb73acd6] <==
	I1101 12:01:07.715589       1 serving.go:386] Generated self-signed cert in-memory
	W1101 12:01:11.690411       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 12:01:11.690450       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 12:01:11.690460       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 12:01:11.690467       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 12:01:11.859319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:01:11.864077       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:01:11.918272       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:01:11.918323       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:01:11.919200       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:01:11.919278       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:01:12.118625       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:01:16 embed-certs-816860 kubelet[777]: W1101 12:01:16.108794     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5efd8111d020e58fb48165308a97fb45fa97705fa23393bac126ff327776fa1a/crio-8ea3cbd6e69befb0f5a731ce4535f92062b336b52dc9cb99d256b022bb6bca27 WatchSource:0}: Error finding container 8ea3cbd6e69befb0f5a731ce4535f92062b336b52dc9cb99d256b022bb6bca27: Status 404 returned error can't find the container with id 8ea3cbd6e69befb0f5a731ce4535f92062b336b52dc9cb99d256b022bb6bca27
	Nov 01 12:01:17 embed-certs-816860 kubelet[777]: I1101 12:01:17.424241     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 12:01:21 embed-certs-816860 kubelet[777]: I1101 12:01:21.081296     777 scope.go:117] "RemoveContainer" containerID="b8dd777bb4b26762a30ed2b09341374ae1c547663b696928518203baffa1920a"
	Nov 01 12:01:22 embed-certs-816860 kubelet[777]: I1101 12:01:22.090914     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:22 embed-certs-816860 kubelet[777]: E1101 12:01:22.091120     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:22 embed-certs-816860 kubelet[777]: I1101 12:01:22.091312     777 scope.go:117] "RemoveContainer" containerID="b8dd777bb4b26762a30ed2b09341374ae1c547663b696928518203baffa1920a"
	Nov 01 12:01:23 embed-certs-816860 kubelet[777]: I1101 12:01:23.100034     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:23 embed-certs-816860 kubelet[777]: E1101 12:01:23.100188     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:26 embed-certs-816860 kubelet[777]: I1101 12:01:26.046087     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:26 embed-certs-816860 kubelet[777]: E1101 12:01:26.046255     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:36 embed-certs-816860 kubelet[777]: I1101 12:01:36.949378     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: I1101 12:01:37.143128     777 scope.go:117] "RemoveContainer" containerID="23735306f8127df392934c71ff7df48511b0da3d62fd842cd5dfc3b800ec1c19"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: I1101 12:01:37.143522     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: E1101 12:01:37.143818     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:37 embed-certs-816860 kubelet[777]: I1101 12:01:37.171000     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zdqk" podStartSLOduration=10.967065423 podStartE2EDuration="22.170979873s" podCreationTimestamp="2025-11-01 12:01:15 +0000 UTC" firstStartedPulling="2025-11-01 12:01:16.112407125 +0000 UTC m=+11.394158816" lastFinishedPulling="2025-11-01 12:01:27.316321567 +0000 UTC m=+22.598073266" observedRunningTime="2025-11-01 12:01:28.129275953 +0000 UTC m=+23.411027644" watchObservedRunningTime="2025-11-01 12:01:37.170979873 +0000 UTC m=+32.452731564"
	Nov 01 12:01:43 embed-certs-816860 kubelet[777]: I1101 12:01:43.166444     777 scope.go:117] "RemoveContainer" containerID="995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e"
	Nov 01 12:01:46 embed-certs-816860 kubelet[777]: I1101 12:01:46.044902     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:46 embed-certs-816860 kubelet[777]: E1101 12:01:46.045778     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:01:58 embed-certs-816860 kubelet[777]: I1101 12:01:58.949957     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:59 embed-certs-816860 kubelet[777]: I1101 12:01:59.247649     777 scope.go:117] "RemoveContainer" containerID="91a0455f3d496b98cb5ddf508af4db33c4fc56d4a1b3ff9fc3194b887a8bdd7c"
	Nov 01 12:01:59 embed-certs-816860 kubelet[777]: I1101 12:01:59.247946     777 scope.go:117] "RemoveContainer" containerID="852ec0ca430f1d6223cae3bedd66958a1a78c8ccc226985261372a691f5ca0df"
	Nov 01 12:01:59 embed-certs-816860 kubelet[777]: E1101 12:01:59.248106     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fvftn_kubernetes-dashboard(081505f3-49f9-45fb-bf00-e0f8344c2d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fvftn" podUID="081505f3-49f9-45fb-bf00-e0f8344c2d53"
	Nov 01 12:02:01 embed-certs-816860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:02:01 embed-certs-816860 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:02:01 embed-certs-816860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a75ddb667c05fcab243c095a16373bc468a7c774034e2506e30ef093ccc9ca4d] <==
	2025/11/01 12:01:27 Using namespace: kubernetes-dashboard
	2025/11/01 12:01:27 Using in-cluster config to connect to apiserver
	2025/11/01 12:01:27 Using secret token for csrf signing
	2025/11/01 12:01:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 12:01:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 12:01:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 12:01:27 Generating JWE encryption key
	2025/11/01 12:01:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 12:01:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 12:01:28 Initializing JWE encryption key from synchronized object
	2025/11/01 12:01:28 Creating in-cluster Sidecar client
	2025/11/01 12:01:28 Serving insecurely on HTTP port: 9090
	2025/11/01 12:01:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:01:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:01:27 Starting overwatch
	
	
	==> storage-provisioner [995b2bf90a8a896d7018e4678ac88c4e1fef036b2b67d4f37acd48d6336f2c6e] <==
	I1101 12:01:12.666272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 12:01:42.700419       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fcad0650c375b2e98da25fb4e730f8abde8304d6e156ade08d80be325c528a3f] <==
	I1101 12:01:43.256264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:01:43.282763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:01:43.282887       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:01:43.286946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:46.743152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:51.005746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:54.604834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:01:57.658610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:00.681251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:00.691164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:00.691421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:02:00.695684       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-816860_006e900b-f871-4ecb-a2e6-6eb004ee17d7!
	W1101 12:02:00.703081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:00.707350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51749ed9-875c-4abb-abce-2d05599a8ef5", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-816860_006e900b-f871-4ecb-a2e6-6eb004ee17d7 became leader
	W1101 12:02:00.725970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:00.797195       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-816860_006e900b-f871-4ecb-a2e6-6eb004ee17d7!
	W1101 12:02:02.736363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:02.745315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:04.749304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:04.757137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:06.761831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:06.768252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-816860 -n embed-certs-816860
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-816860 -n embed-certs-816860: exit status 2 (429.083214ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-816860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.78s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (317.544371ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:02:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-915456
helpers_test.go:243: (dbg) docker inspect newest-cni-915456:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e",
	        "Created": "2025-11-01T12:02:18.412307635Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:02:18.468908358Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/hostname",
	        "HostsPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/hosts",
	        "LogPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e-json.log",
	        "Name": "/newest-cni-915456",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-915456:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-915456",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e",
	                "LowerDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-915456",
	                "Source": "/var/lib/docker/volumes/newest-cni-915456/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-915456",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-915456",
	                "name.minikube.sigs.k8s.io": "newest-cni-915456",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3963dca3db0f8113024cb8d5f8e9e2143e95e9c39f85de94e5406f69ddaf1937",
	            "SandboxKey": "/var/run/docker/netns/3963dca3db0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-915456": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:eb:b6:8f:1e:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "10431394969d1cfa6501e0e03a4192e5aff1f9a8f6a90ca624ff65c125c75830",
	                    "EndpointID": "8007937105d74e74e7545b4466f4ac71605d66435cec4906be02334a69eaa6af",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-915456",
	                        "888185dcceae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-915456 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-915456 logs -n 25: (1.162224589s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-534694       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ delete  │ -p old-k8s-version-952358                                                                                                                                                                                                                     │ old-k8s-version-952358       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:58 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ delete  │ -p cert-expiration-534694                                                                                                                                                                                                                     │ cert-expiration-534694       │ jenkins │ v1.37.0 │ 01 Nov 25 11:58 UTC │ 01 Nov 25 11:59 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:02:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:02:12.565766  739030 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:02:12.566373  739030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:12.566388  739030 out.go:374] Setting ErrFile to fd 2...
	I1101 12:02:12.566394  739030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:12.566725  739030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:02:12.567169  739030 out.go:368] Setting JSON to false
	I1101 12:02:12.568199  739030 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13482,"bootTime":1761985051,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:02:12.568266  739030 start.go:143] virtualization:  
	I1101 12:02:12.572323  739030 out.go:179] * [newest-cni-915456] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:02:12.575613  739030 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:02:12.575657  739030 notify.go:221] Checking for updates...
	I1101 12:02:12.581771  739030 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:02:12.584729  739030 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:02:12.588407  739030 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:02:12.591447  739030 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:02:12.594308  739030 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:02:12.598157  739030 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:12.598292  739030 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:02:12.641508  739030 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:02:12.641642  739030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:12.782947  739030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:12.769941279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:12.783062  739030 docker.go:319] overlay module found
	I1101 12:02:12.786162  739030 out.go:179] * Using the docker driver based on user configuration
	I1101 12:02:12.789002  739030 start.go:309] selected driver: docker
	I1101 12:02:12.789023  739030 start.go:930] validating driver "docker" against <nil>
	I1101 12:02:12.789037  739030 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:02:12.789853  739030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:12.905658  739030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:12.89434502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:12.905836  739030 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 12:02:12.905861  739030 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 12:02:12.906112  739030 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:02:12.909535  739030 out.go:179] * Using Docker driver with root privileges
	I1101 12:02:12.912446  739030 cni.go:84] Creating CNI manager for ""
	I1101 12:02:12.912526  739030 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:12.912535  739030 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 12:02:12.912627  739030 start.go:353] cluster config:
	{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:12.915820  739030 out.go:179] * Starting "newest-cni-915456" primary control-plane node in "newest-cni-915456" cluster
	I1101 12:02:12.918623  739030 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:02:12.921594  739030 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:02:12.924544  739030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:02:12.924610  739030 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:02:12.924621  739030 cache.go:59] Caching tarball of preloaded images
	I1101 12:02:12.924708  739030 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:02:12.924717  739030 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:02:12.924837  739030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:12.924856  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json: {Name:mkf13b012d8a0d5618bb9337abce72caa80b78c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:12.925030  739030 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:02:12.951219  739030 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:02:12.951238  739030 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:02:12.951251  739030 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:02:12.951273  739030 start.go:360] acquireMachinesLock for newest-cni-915456: {Name:mkb1ddd4203c8257583d515453d1119aaa07ce06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:02:12.951374  739030 start.go:364] duration metric: took 85.014µs to acquireMachinesLock for "newest-cni-915456"
	I1101 12:02:12.951399  739030 start.go:93] Provisioning new machine with config: &{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:02:12.951469  739030 start.go:125] createHost starting for "" (driver="docker")
	I1101 12:02:12.172865  735220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:02:12.265138  735220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:02:12.265370  735220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 12:02:12.346510  735220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:02:13.477076  735220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304178952s)
	I1101 12:02:13.477146  735220 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.211988666s)
	I1101 12:02:13.477175  735220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.130642786s)
	I1101 12:02:13.477957  735220 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:02:13.477142  735220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.211729347s)
	I1101 12:02:13.478132  735220 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 12:02:13.586593  735220 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 12:02:13.589605  735220 addons.go:515] duration metric: took 2.191486089s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 12:02:13.985529  735220 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-772362" context rescaled to 1 replicas
	W1101 12:02:15.481829  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
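	(Annotation: the CoreDNS edit logged above rewrites the coredns ConfigMap so in-cluster workloads resolve host.minikube.internal. A minimal, hypothetical spot-check, assuming the host kubeconfig has a context named after the profile (default-k8s-diff-port-772362) and the stock Corefile layout:)

	kubectl --context default-k8s-diff-port-772362 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# Expected excerpt, per the sed pipeline above:
	#    hosts {
	#       192.168.85.1 host.minikube.internal
	#       fallthrough
	#    }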
	I1101 12:02:12.955395  739030 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 12:02:12.955666  739030 start.go:159] libmachine.API.Create for "newest-cni-915456" (driver="docker")
	I1101 12:02:12.955697  739030 client.go:173] LocalClient.Create starting
	I1101 12:02:12.955769  739030 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 12:02:12.955806  739030 main.go:143] libmachine: Decoding PEM data...
	I1101 12:02:12.955823  739030 main.go:143] libmachine: Parsing certificate...
	I1101 12:02:12.955883  739030 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 12:02:12.955901  739030 main.go:143] libmachine: Decoding PEM data...
	I1101 12:02:12.955911  739030 main.go:143] libmachine: Parsing certificate...
	I1101 12:02:12.956290  739030 cli_runner.go:164] Run: docker network inspect newest-cni-915456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 12:02:12.984968  739030 cli_runner.go:211] docker network inspect newest-cni-915456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 12:02:12.985050  739030 network_create.go:284] running [docker network inspect newest-cni-915456] to gather additional debugging logs...
	I1101 12:02:12.985068  739030 cli_runner.go:164] Run: docker network inspect newest-cni-915456
	W1101 12:02:13.011310  739030 cli_runner.go:211] docker network inspect newest-cni-915456 returned with exit code 1
	I1101 12:02:13.011350  739030 network_create.go:287] error running [docker network inspect newest-cni-915456]: docker network inspect newest-cni-915456: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-915456 not found
	I1101 12:02:13.011366  739030 network_create.go:289] output of [docker network inspect newest-cni-915456]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-915456 not found
	
	** /stderr **
	I1101 12:02:13.011464  739030 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:02:13.031577  739030 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 12:02:13.032061  739030 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 12:02:13.032438  739030 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 12:02:13.033005  739030 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05500}
	I1101 12:02:13.033028  739030 network_create.go:124] attempt to create docker network newest-cni-915456 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 12:02:13.033081  739030 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-915456 newest-cni-915456
	I1101 12:02:13.116164  739030 network_create.go:108] docker network newest-cni-915456 192.168.76.0/24 created
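	(Annotation: the three "skipping subnet" lines above show how the free /24 is chosen: bridge networks already in use are enumerated and the next candidate is taken (192.168.49 -> 58 -> 67 -> 76 in this run). A rough, hypothetical host-side equivalent of that probe, using only the docker CLI:)

	# Anything printed here is a "taken" subnet that the picker must skip.
	docker network ls --filter driver=bridge -q |
	  xargs docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{end}}'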
	I1101 12:02:13.116199  739030 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-915456" container
	I1101 12:02:13.116292  739030 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 12:02:13.146110  739030 cli_runner.go:164] Run: docker volume create newest-cni-915456 --label name.minikube.sigs.k8s.io=newest-cni-915456 --label created_by.minikube.sigs.k8s.io=true
	I1101 12:02:13.176917  739030 oci.go:103] Successfully created a docker volume newest-cni-915456
	I1101 12:02:13.177026  739030 cli_runner.go:164] Run: docker run --rm --name newest-cni-915456-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-915456 --entrypoint /usr/bin/test -v newest-cni-915456:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 12:02:13.858486  739030 oci.go:107] Successfully prepared a docker volume newest-cni-915456
	I1101 12:02:13.858531  739030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:02:13.858551  739030 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 12:02:13.858623  739030 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-915456:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 12:02:17.481943  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:19.981543  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
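	(Annotation: the node_ready retries above poll until the node reports Ready. A hypothetical one-shot equivalent with kubectl, reusing the 6m0s budget logged earlier and the same assumed context name:)

	kubectl --context default-k8s-diff-port-772362 wait \
	  --for=condition=Ready node/default-k8s-diff-port-772362 --timeout=6m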
	I1101 12:02:18.333404  739030 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-915456:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.47473837s)
	I1101 12:02:18.333437  739030 kic.go:203] duration metric: took 4.474882511s to extract preloaded images to volume ...
	W1101 12:02:18.333576  739030 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 12:02:18.333730  739030 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 12:02:18.396619  739030 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-915456 --name newest-cni-915456 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-915456 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-915456 --network newest-cni-915456 --ip 192.168.76.2 --volume newest-cni-915456:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 12:02:18.729557  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Running}}
	I1101 12:02:18.751220  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:18.780435  739030 cli_runner.go:164] Run: docker exec newest-cni-915456 stat /var/lib/dpkg/alternatives/iptables
	I1101 12:02:18.838483  739030 oci.go:144] the created container "newest-cni-915456" has a running status.
	I1101 12:02:18.838512  739030 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa...
	I1101 12:02:18.996523  739030 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 12:02:19.031858  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:19.065422  739030 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 12:02:19.065447  739030 kic_runner.go:114] Args: [docker exec --privileged newest-cni-915456 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 12:02:19.126793  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:19.163547  739030 machine.go:94] provisionDockerMachine start ...
	I1101 12:02:19.163657  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:19.192688  739030 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:19.193042  739030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1101 12:02:19.193051  739030 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:02:19.193837  739030 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34976->127.0.0.1:33810: read: connection reset by peer
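	(Annotation: the SSH dial above targets 127.0.0.1:33810, i.e. the ephemeral host port Docker assigned when the kic container published 22/tcp with --publish=127.0.0.1::22. A hypothetical way to resolve that mapping by hand:)

	docker port newest-cni-915456 22/tcp   # prints 127.0.0.1:33810 in this run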
	I1101 12:02:22.341328  739030 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:22.341352  739030 ubuntu.go:182] provisioning hostname "newest-cni-915456"
	I1101 12:02:22.341423  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:22.359059  739030 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:22.359360  739030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1101 12:02:22.359378  739030 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-915456 && echo "newest-cni-915456" | sudo tee /etc/hostname
	I1101 12:02:22.519701  739030 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:22.519791  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:22.537649  739030 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:22.538049  739030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1101 12:02:22.538073  739030 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-915456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-915456/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-915456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:02:22.690524  739030 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:02:22.690553  739030 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:02:22.690573  739030 ubuntu.go:190] setting up certificates
	I1101 12:02:22.690584  739030 provision.go:84] configureAuth start
	I1101 12:02:22.690654  739030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:22.708615  739030 provision.go:143] copyHostCerts
	I1101 12:02:22.708681  739030 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:02:22.708690  739030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:02:22.708770  739030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:02:22.708859  739030 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:02:22.708864  739030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:02:22.708889  739030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:02:22.708938  739030 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:02:22.708943  739030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:02:22.708967  739030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:02:22.709019  739030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.newest-cni-915456 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-915456]
	I1101 12:02:23.265963  739030 provision.go:177] copyRemoteCerts
	I1101 12:02:23.266032  739030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:02:23.266084  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:23.284028  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:23.389961  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:02:23.413923  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 12:02:23.433302  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:02:23.453030  739030 provision.go:87] duration metric: took 762.423038ms to configureAuth
	I1101 12:02:23.453066  739030 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:02:23.453322  739030 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:23.453434  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:23.471065  739030 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:23.471378  739030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1101 12:02:23.471397  739030 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:02:23.747698  739030 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:02:23.747722  739030 machine.go:97] duration metric: took 4.584156205s to provisionDockerMachine
	I1101 12:02:23.747732  739030 client.go:176] duration metric: took 10.792029001s to LocalClient.Create
	I1101 12:02:23.747744  739030 start.go:167] duration metric: took 10.79208076s to libmachine.API.Create "newest-cni-915456"
	I1101 12:02:23.747751  739030 start.go:293] postStartSetup for "newest-cni-915456" (driver="docker")
	I1101 12:02:23.747762  739030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:02:23.747825  739030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:02:23.747874  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:23.767110  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:23.874183  739030 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:02:23.877614  739030 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:02:23.877641  739030 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:02:23.877656  739030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:02:23.877722  739030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:02:23.877804  739030 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:02:23.877902  739030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:02:23.885743  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:02:23.905791  739030 start.go:296] duration metric: took 158.024992ms for postStartSetup
	I1101 12:02:23.906166  739030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:23.923769  739030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:23.924081  739030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:02:23.924136  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:23.941637  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:24.043282  739030 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:02:24.048428  739030 start.go:128] duration metric: took 11.096943219s to createHost
	I1101 12:02:24.048457  739030 start.go:83] releasing machines lock for "newest-cni-915456", held for 11.097074461s
	I1101 12:02:24.048545  739030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:24.065645  739030 ssh_runner.go:195] Run: cat /version.json
	I1101 12:02:24.065733  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:24.066014  739030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:02:24.066071  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:24.085368  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:24.090097  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:24.275321  739030 ssh_runner.go:195] Run: systemctl --version
	I1101 12:02:24.282228  739030 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:02:24.320290  739030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:02:24.324958  739030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:02:24.325030  739030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:02:24.355026  739030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 12:02:24.355091  739030 start.go:496] detecting cgroup driver to use...
	I1101 12:02:24.355128  739030 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:02:24.355180  739030 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:02:24.372829  739030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:02:24.385310  739030 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:02:24.385410  739030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:02:24.403099  739030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:02:24.422351  739030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:02:24.563581  739030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:02:24.703479  739030 docker.go:234] disabling docker service ...
	I1101 12:02:24.703594  739030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:02:24.725671  739030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:02:24.739548  739030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:02:24.866004  739030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:02:24.999065  739030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:02:25.015430  739030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:02:25.030958  739030 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:02:25.031068  739030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.041313  739030 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:02:25.041387  739030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.050779  739030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.060881  739030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.074411  739030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:02:25.083224  739030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.092891  739030 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.108057  739030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:02:25.117485  739030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:02:25.125855  739030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:02:25.133667  739030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:02:25.261735  739030 ssh_runner.go:195] Run: sudo systemctl restart crio
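	(Annotation: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A hypothetical spot-check, inside the node, of the keys those substitutions should have left behind:)

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the substitutions in this run:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",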
	I1101 12:02:25.395269  739030 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:02:25.395402  739030 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:02:25.399753  739030 start.go:564] Will wait 60s for crictl version
	I1101 12:02:25.399869  739030 ssh_runner.go:195] Run: which crictl
	I1101 12:02:25.404802  739030 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:02:25.429859  739030 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:02:25.429999  739030 ssh_runner.go:195] Run: crio --version
	I1101 12:02:25.463418  739030 ssh_runner.go:195] Run: crio --version
	I1101 12:02:25.502966  739030 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:02:25.505929  739030 cli_runner.go:164] Run: docker network inspect newest-cni-915456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:02:25.522947  739030 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:02:25.527023  739030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:02:25.540689  739030 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1101 12:02:22.481561  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:24.981452  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:25.543648  739030 kubeadm.go:884] updating cluster {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:02:25.543817  739030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:02:25.543912  739030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:02:25.593918  739030 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:02:25.593945  739030 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:02:25.594010  739030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:02:25.621088  739030 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:02:25.621122  739030 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:02:25.621130  739030 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:02:25.621223  739030 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-915456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:02:25.621311  739030 ssh_runner.go:195] Run: crio config
	I1101 12:02:25.685507  739030 cni.go:84] Creating CNI manager for ""
	I1101 12:02:25.685531  739030 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:25.685548  739030 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 12:02:25.685596  739030 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-915456 NodeName:newest-cni-915456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:02:25.685802  739030 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-915456"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:02:25.685902  739030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:02:25.694627  739030 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:02:25.694701  739030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:02:25.703746  739030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 12:02:25.717179  739030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:02:25.733092  739030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
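	(Annotation: the 2212-byte file copied above is the kubeadm config printed in full earlier. A hypothetical sanity check inside the node before kubeadm is actually invoked, using the bundled binary path from the log and assuming "kubeadm config validate" is available in v1.34.1:)

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new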
	I1101 12:02:25.747076  739030 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:02:25.750940  739030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:02:25.761808  739030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:02:25.871135  739030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:02:25.888137  739030 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456 for IP: 192.168.76.2
	I1101 12:02:25.888159  739030 certs.go:195] generating shared ca certs ...
	I1101 12:02:25.888177  739030 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:25.888378  739030 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:02:25.888436  739030 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:02:25.888447  739030 certs.go:257] generating profile certs ...
	I1101 12:02:25.888516  739030 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.key
	I1101 12:02:25.888533  739030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.crt with IP's: []
	I1101 12:02:26.365608  739030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.crt ...
	I1101 12:02:26.365641  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.crt: {Name:mkdb44687fc346c233c19072adcab3e2f2d21b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:26.365859  739030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.key ...
	I1101 12:02:26.365873  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.key: {Name:mkdc650b40e997bb76f4a5ae76594d3c343dd2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:26.365978  739030 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14
	I1101 12:02:26.365998  739030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt.4fb12c14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 12:02:26.871980  739030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt.4fb12c14 ...
	I1101 12:02:26.872012  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt.4fb12c14: {Name:mke649d9874eddc1df56d63d35698f1f034e6936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:26.872209  739030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14 ...
	I1101 12:02:26.872226  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14: {Name:mk6b4d0eb7b1691fff08b5984117dbca119534ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:26.872318  739030 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt.4fb12c14 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt
	I1101 12:02:26.872396  739030 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key
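	(Annotation: the apiserver certificate generated above embeds the SANs listed in the crypto.go line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A hypothetical host-side inspection of the finished cert:)

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'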
	I1101 12:02:26.872459  739030 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key
	I1101 12:02:26.872477  739030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt with IP's: []
	I1101 12:02:27.236900  739030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt ...
	I1101 12:02:27.236934  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt: {Name:mk491dd05a41458a79448112c0cd2ea0155c60c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:27.237129  739030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key ...
	I1101 12:02:27.237147  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key: {Name:mk4948a86563f1c5e722ae50b9ae1e5d7faf892f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:27.237345  739030 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:02:27.237389  739030 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:02:27.237404  739030 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:02:27.237428  739030 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:02:27.237454  739030 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:02:27.237480  739030 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:02:27.237533  739030 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:02:27.238127  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:02:27.257119  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:02:27.276399  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:02:27.294578  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:02:27.314051  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 12:02:27.333465  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:02:27.351664  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:02:27.369422  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 12:02:27.388327  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:02:27.408018  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:02:27.428249  739030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:02:27.448534  739030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:02:27.463064  739030 ssh_runner.go:195] Run: openssl version
	I1101 12:02:27.472521  739030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:02:27.483659  739030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:02:27.487907  739030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:02:27.488029  739030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:02:27.530270  739030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:02:27.538997  739030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:02:27.552560  739030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:02:27.556972  739030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:02:27.557120  739030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:02:27.599413  739030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:02:27.608841  739030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:02:27.617224  739030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:02:27.621148  739030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:02:27.621243  739030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:02:27.662589  739030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:02:27.671041  739030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:02:27.674792  739030 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 12:02:27.674892  739030 kubeadm.go:401] StartCluster: {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:27.674971  739030 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:02:27.675040  739030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:02:27.701924  739030 cri.go:89] found id: ""
	I1101 12:02:27.702046  739030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:02:27.710528  739030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 12:02:27.718522  739030 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 12:02:27.718629  739030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 12:02:27.726527  739030 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 12:02:27.726550  739030 kubeadm.go:158] found existing configuration files:
	
	I1101 12:02:27.726616  739030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 12:02:27.734305  739030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 12:02:27.734373  739030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 12:02:27.741513  739030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 12:02:27.749236  739030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 12:02:27.749322  739030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 12:02:27.756502  739030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 12:02:27.763989  739030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 12:02:27.764056  739030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 12:02:27.771841  739030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 12:02:27.779677  739030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 12:02:27.779784  739030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 12:02:27.787335  739030 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 12:02:27.827874  739030 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 12:02:27.827940  739030 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 12:02:27.852454  739030 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 12:02:27.852540  739030 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 12:02:27.852584  739030 kubeadm.go:319] OS: Linux
	I1101 12:02:27.852637  739030 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 12:02:27.852692  739030 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 12:02:27.852746  739030 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 12:02:27.852801  739030 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 12:02:27.852856  739030 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 12:02:27.852912  739030 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 12:02:27.852964  739030 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 12:02:27.853018  739030 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 12:02:27.853070  739030 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 12:02:27.925976  739030 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 12:02:27.926102  739030 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 12:02:27.926202  739030 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 12:02:27.934306  739030 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 12:02:27.482307  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:29.981143  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:27.937906  739030 out.go:252]   - Generating certificates and keys ...
	I1101 12:02:27.938040  739030 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 12:02:27.938133  739030 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 12:02:28.654088  739030 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 12:02:29.033543  739030 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 12:02:29.160005  739030 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 12:02:29.674085  739030 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 12:02:29.716823  739030 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 12:02:29.717239  739030 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-915456] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 12:02:31.344489  739030 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 12:02:31.344649  739030 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-915456] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 12:02:32.395503  739030 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 12:02:32.610543  739030 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 12:02:32.760566  739030 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 12:02:32.760908  739030 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 12:02:33.676649  739030 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 12:02:34.107698  739030 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 12:02:34.403375  739030 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 12:02:34.798418  739030 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 12:02:35.387029  739030 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 12:02:35.387692  739030 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 12:02:35.390351  739030 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 12:02:32.481051  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:34.482481  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:35.394000  739030 out.go:252]   - Booting up control plane ...
	I1101 12:02:35.394103  739030 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 12:02:35.394185  739030 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 12:02:35.394256  739030 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 12:02:35.412359  739030 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 12:02:35.412489  739030 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 12:02:35.420965  739030 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 12:02:35.421616  739030 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 12:02:35.421964  739030 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 12:02:35.570207  739030 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 12:02:35.570350  739030 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 12:02:36.569591  739030 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000742197s
	I1101 12:02:36.573410  739030 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 12:02:36.573524  739030 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 12:02:36.573922  739030 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 12:02:36.574021  739030 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 12:02:36.981107  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:38.981719  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:41.480741  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:40.698082  739030 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.124326818s
	I1101 12:02:41.171132  739030 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.597682827s
	I1101 12:02:43.076160  739030 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502701761s
	I1101 12:02:43.104138  739030 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 12:02:43.122376  739030 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 12:02:43.141102  739030 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 12:02:43.141313  739030 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-915456 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 12:02:43.158236  739030 kubeadm.go:319] [bootstrap-token] Using token: hxkyxs.1tcwhnz33u9o5dcb
	I1101 12:02:43.161362  739030 out.go:252]   - Configuring RBAC rules ...
	I1101 12:02:43.161488  739030 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 12:02:43.171068  739030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 12:02:43.179870  739030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 12:02:43.187321  739030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 12:02:43.191816  739030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 12:02:43.202345  739030 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 12:02:43.487647  739030 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 12:02:43.942688  739030 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 12:02:44.485442  739030 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 12:02:44.486928  739030 kubeadm.go:319] 
	I1101 12:02:44.487012  739030 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 12:02:44.487034  739030 kubeadm.go:319] 
	I1101 12:02:44.487120  739030 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 12:02:44.487129  739030 kubeadm.go:319] 
	I1101 12:02:44.487156  739030 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 12:02:44.487224  739030 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 12:02:44.487282  739030 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 12:02:44.487291  739030 kubeadm.go:319] 
	I1101 12:02:44.487349  739030 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 12:02:44.487357  739030 kubeadm.go:319] 
	I1101 12:02:44.487409  739030 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 12:02:44.487417  739030 kubeadm.go:319] 
	I1101 12:02:44.487473  739030 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 12:02:44.487557  739030 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 12:02:44.487633  739030 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 12:02:44.487641  739030 kubeadm.go:319] 
	I1101 12:02:44.487732  739030 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 12:02:44.487822  739030 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 12:02:44.487833  739030 kubeadm.go:319] 
	I1101 12:02:44.487928  739030 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hxkyxs.1tcwhnz33u9o5dcb \
	I1101 12:02:44.488042  739030 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 12:02:44.488068  739030 kubeadm.go:319] 	--control-plane 
	I1101 12:02:44.488081  739030 kubeadm.go:319] 
	I1101 12:02:44.488176  739030 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 12:02:44.488184  739030 kubeadm.go:319] 
	I1101 12:02:44.488272  739030 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hxkyxs.1tcwhnz33u9o5dcb \
	I1101 12:02:44.488387  739030 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 12:02:44.491805  739030 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 12:02:44.492063  739030 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 12:02:44.492182  739030 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 12:02:44.492199  739030 cni.go:84] Creating CNI manager for ""
	I1101 12:02:44.492208  739030 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:44.495464  739030 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 12:02:43.481761  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:45.482658  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:44.498415  739030 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 12:02:44.502800  739030 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 12:02:44.502821  739030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 12:02:44.516580  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 12:02:44.842104  739030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 12:02:44.842173  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:44.842266  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-915456 minikube.k8s.io/updated_at=2025_11_01T12_02_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=newest-cni-915456 minikube.k8s.io/primary=true
	I1101 12:02:44.999055  739030 ops.go:34] apiserver oom_adj: -16
	I1101 12:02:44.999189  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:45.499316  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:46.000131  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:46.500129  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:46.999657  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:47.499523  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:47.999895  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:48.499315  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:49.000210  739030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:02:49.173258  739030 kubeadm.go:1114] duration metric: took 4.331147513s to wait for elevateKubeSystemPrivileges
	I1101 12:02:49.173293  739030 kubeadm.go:403] duration metric: took 21.498406209s to StartCluster
	I1101 12:02:49.173311  739030 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:49.173378  739030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:02:49.174563  739030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:02:49.174843  739030 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:02:49.174991  739030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 12:02:49.175300  739030 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:49.175350  739030 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:02:49.175425  739030 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-915456"
	I1101 12:02:49.175445  739030 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-915456"
	I1101 12:02:49.175486  739030 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:02:49.176262  739030 addons.go:70] Setting default-storageclass=true in profile "newest-cni-915456"
	I1101 12:02:49.176290  739030 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-915456"
	I1101 12:02:49.176583  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:49.176800  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:49.178457  739030 out.go:179] * Verifying Kubernetes components...
	I1101 12:02:49.185650  739030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:02:49.215738  739030 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:02:49.219003  739030 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:02:49.219026  739030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:02:49.219099  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:49.225454  739030 addons.go:239] Setting addon default-storageclass=true in "newest-cni-915456"
	I1101 12:02:49.225498  739030 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:02:49.228581  739030 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:49.262387  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:49.266501  739030 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:02:49.266523  739030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:02:49.266588  739030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:49.299418  739030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:02:49.651734  739030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:02:49.661815  739030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 12:02:49.662024  739030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:02:49.666833  739030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:02:50.229329  739030 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:02:50.229416  739030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:02:50.229520  739030 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 12:02:50.259417  739030 api_server.go:72] duration metric: took 1.084532754s to wait for apiserver process to appear ...
	I1101 12:02:50.259443  739030 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:02:50.259474  739030 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:02:50.273578  739030 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 12:02:50.273831  739030 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 12:02:50.275176  739030 api_server.go:141] control plane version: v1.34.1
	I1101 12:02:50.275205  739030 api_server.go:131] duration metric: took 15.742357ms to wait for apiserver health ...
	I1101 12:02:50.275215  739030 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:02:50.277345  739030 addons.go:515] duration metric: took 1.101978214s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 12:02:50.278911  739030 system_pods.go:59] 9 kube-system pods found
	I1101 12:02:50.278961  739030 system_pods.go:61] "coredns-66bc5c9577-fwd4w" [18c6c47e-3e00-4794-887a-a05b3478a545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:02:50.278973  739030 system_pods.go:61] "coredns-66bc5c9577-pr7sm" [46d2b5b5-cf75-4896-969d-da65926871d4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:02:50.278983  739030 system_pods.go:61] "etcd-newest-cni-915456" [c1377a6a-0f63-41c5-94d9-1c1bcf7c0049] Running
	I1101 12:02:50.278996  739030 system_pods.go:61] "kindnet-xtbw2" [f91412bc-141d-4706-a3b4-f173a4a731a3] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 12:02:50.279006  739030 system_pods.go:61] "kube-apiserver-newest-cni-915456" [86dc9fc3-c717-40db-b0cf-633dfdb0ea87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:02:50.279014  739030 system_pods.go:61] "kube-controller-manager-newest-cni-915456" [cef60eef-ea38-49dc-b7e8-219972759c49] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:02:50.279029  739030 system_pods.go:61] "kube-proxy-4cxmx" [bf13f387-a80a-4910-8fef-45c3ace6b6c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 12:02:50.279040  739030 system_pods.go:61] "kube-scheduler-newest-cni-915456" [8d027bf3-40c5-4f8f-92f3-ae047cd94a2f] Running
	I1101 12:02:50.279046  739030 system_pods.go:61] "storage-provisioner" [693b39e3-8e8a-4380-8304-7513694bb16c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:02:50.279052  739030 system_pods.go:74] duration metric: took 3.832452ms to wait for pod list to return data ...
	I1101 12:02:50.279061  739030 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:02:50.287955  739030 default_sa.go:45] found service account: "default"
	I1101 12:02:50.287989  739030 default_sa.go:55] duration metric: took 8.920247ms for default service account to be created ...
	I1101 12:02:50.288004  739030 kubeadm.go:587] duration metric: took 1.113124779s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:02:50.288042  739030 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:02:50.300294  739030 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:02:50.300329  739030 node_conditions.go:123] node cpu capacity is 2
	I1101 12:02:50.300344  739030 node_conditions.go:105] duration metric: took 12.294099ms to run NodePressure ...
	I1101 12:02:50.300359  739030 start.go:242] waiting for startup goroutines ...
	I1101 12:02:50.733471  739030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-915456" context rescaled to 1 replicas
	I1101 12:02:50.733512  739030 start.go:247] waiting for cluster config update ...
	I1101 12:02:50.733525  739030 start.go:256] writing updated cluster config ...
	I1101 12:02:50.733939  739030 ssh_runner.go:195] Run: rm -f paused
	I1101 12:02:50.802065  739030 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:02:50.805583  739030 out.go:179] * Done! kubectl is now configured to use "newest-cni-915456" cluster and "default" namespace by default
	W1101 12:02:47.981658  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	W1101 12:02:50.480773  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.500435016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.506375483Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f583d40f-be72-42ba-9d6e-04e0330347f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.509551901Z" level=info msg="Ran pod sandbox e36f64c3bed2227c68d9df3aabdf3ea8dc5fe1dda6e1541de41aa3c591569374 with infra container: kube-system/kindnet-xtbw2/POD" id=f583d40f-be72-42ba-9d6e-04e0330347f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.512243366Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f526a123-1822-4e27-8dee-d49a1656a6dc name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.518665652Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=62e5ff86-5fbc-4e9a-adc2-1062dab02515 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.528834128Z" level=info msg="Creating container: kube-system/kindnet-xtbw2/kindnet-cni" id=5f469c5c-96ee-4a05-9942-fd7e1a2f84ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.529138107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.53474129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.536007472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.560064029Z" level=info msg="Created container 271ef88b7b8bb448d465e9bdb15f4fb8c15b7d364f103d91ecb82a587db7aebe: kube-system/kindnet-xtbw2/kindnet-cni" id=5f469c5c-96ee-4a05-9942-fd7e1a2f84ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.56439554Z" level=info msg="Starting container: 271ef88b7b8bb448d465e9bdb15f4fb8c15b7d364f103d91ecb82a587db7aebe" id=b7db1857-a8df-4b53-bc15-f9add7f92450 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.566766672Z" level=info msg="Started container" PID=1485 containerID=271ef88b7b8bb448d465e9bdb15f4fb8c15b7d364f103d91ecb82a587db7aebe description=kube-system/kindnet-xtbw2/kindnet-cni id=b7db1857-a8df-4b53-bc15-f9add7f92450 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e36f64c3bed2227c68d9df3aabdf3ea8dc5fe1dda6e1541de41aa3c591569374
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.977058448Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4cxmx/POD" id=36e531d8-f898-4934-b60d-5b5cce1f6c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.977134396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.981662143Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=36e531d8-f898-4934-b60d-5b5cce1f6c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.986955372Z" level=info msg="Ran pod sandbox 66b03aca064cfe4cc0b83b064b9d8be704a11d34d215d1c17012f99231131944 with infra container: kube-system/kube-proxy-4cxmx/POD" id=36e531d8-f898-4934-b60d-5b5cce1f6c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.988361339Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=01a5d5a3-1d30-43c0-b158-03e005f0bf2c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.991043745Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=786993b5-8290-4be9-8dd0-5dc2cb9a2650 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.997035494Z" level=info msg="Creating container: kube-system/kube-proxy-4cxmx/kube-proxy" id=8872d841-e439-40b5-b482-ab8a6213a57b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:50 newest-cni-915456 crio[839]: time="2025-11-01T12:02:50.997165244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:51 newest-cni-915456 crio[839]: time="2025-11-01T12:02:51.00476055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:51 newest-cni-915456 crio[839]: time="2025-11-01T12:02:51.005515702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:51 newest-cni-915456 crio[839]: time="2025-11-01T12:02:51.064439976Z" level=info msg="Created container 97d3c323838b4afde4b7bb4e19da906bdb34cfd67f6cdba6172f1ce0d28252f9: kube-system/kube-proxy-4cxmx/kube-proxy" id=8872d841-e439-40b5-b482-ab8a6213a57b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:51 newest-cni-915456 crio[839]: time="2025-11-01T12:02:51.069664224Z" level=info msg="Starting container: 97d3c323838b4afde4b7bb4e19da906bdb34cfd67f6cdba6172f1ce0d28252f9" id=c69a7bc0-2e9b-437b-b160-b170d22a967b name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:02:51 newest-cni-915456 crio[839]: time="2025-11-01T12:02:51.082565412Z" level=info msg="Started container" PID=1542 containerID=97d3c323838b4afde4b7bb4e19da906bdb34cfd67f6cdba6172f1ce0d28252f9 description=kube-system/kube-proxy-4cxmx/kube-proxy id=c69a7bc0-2e9b-437b-b160-b170d22a967b name=/runtime.v1.RuntimeService/StartContainer sandboxID=66b03aca064cfe4cc0b83b064b9d8be704a11d34d215d1c17012f99231131944
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	97d3c323838b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   66b03aca064cf       kube-proxy-4cxmx                            kube-system
	271ef88b7b8bb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   e36f64c3bed22       kindnet-xtbw2                               kube-system
	e4a89ab09e821       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   722bd279f7fbf       etcd-newest-cni-915456                      kube-system
	39f48eb331833       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   2708c219fbbf1       kube-controller-manager-newest-cni-915456   kube-system
	ca2b0b4ba7a93       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   6fbaee3ad5498       kube-scheduler-newest-cni-915456            kube-system
	1e7a483876c6b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   e77cfff22b379       kube-apiserver-newest-cni-915456            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-915456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-915456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=newest-cni-915456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T12_02_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 12:02:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-915456
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:02:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:02:44 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:02:44 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:02:44 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 12:02:44 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-915456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0c03d2de-2716-4951-b7fa-b9e1f188afd7
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-915456                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-xtbw2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-915456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-915456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-4cxmx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-915456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 1s    kube-proxy       
	  Normal   Starting                 9s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s    kubelet          Node newest-cni-915456 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-915456 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s    kubelet          Node newest-cni-915456 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-915456 event: Registered Node newest-cni-915456 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e4a89ab09e8213ad4a159b51c14feb3ad122b5b0d453c582bc28f07748f6024a] <==
	{"level":"warn","ts":"2025-11-01T12:02:39.255235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.278387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.311488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.335209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.350774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.413775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.428446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.481446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.505385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.539288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.602030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.625972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.646741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.672056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.736556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.758184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.805444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.825903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.855021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.887901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.913931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:39.981808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:40.006887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:40.039335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:40.201994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36496","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:02:52 up  3:45,  0 user,  load average: 4.10, 3.83, 3.03
	Linux newest-cni-915456 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [271ef88b7b8bb448d465e9bdb15f4fb8c15b7d364f103d91ecb82a587db7aebe] <==
	I1101 12:02:50.626646       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:02:50.626873       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 12:02:50.626992       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:02:50.627012       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:02:50.627026       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:02:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:02:50.921902       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:02:50.921930       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:02:50.921940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:02:50.922275       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1e7a483876c6bc526342046e7e707014a3eedebd13cb5abe09896f1a8aedc18e] <==
	I1101 12:02:41.160873       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 12:02:41.169081       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:02:41.171820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:02:41.173559       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:02:41.175019       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:02:41.182022       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:02:41.186305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:41.190676       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 12:02:41.873918       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 12:02:41.878953       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 12:02:41.878979       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:02:42.713470       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:02:42.819965       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:02:42.964595       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 12:02:42.973499       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 12:02:42.974784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 12:02:42.982306       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:02:43.025156       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 12:02:43.912768       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:02:43.940606       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 12:02:43.955149       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 12:02:48.683025       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:48.687553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:49.032643       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:02:49.081776       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [39f48eb331833322d0a3f02fe9298a0dff3e4c2979785e4401f5e3765c2d7e57] <==
	I1101 12:02:48.031548       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:02:48.032426       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 12:02:48.032461       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:02:48.032588       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:02:48.032603       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 12:02:48.032629       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 12:02:48.038956       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:02:48.043344       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 12:02:48.043440       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:02:48.046958       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 12:02:48.047675       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-915456" podCIDRs=["10.42.0.0/24"]
	I1101 12:02:48.049843       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 12:02:48.057215       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:02:48.066745       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 12:02:48.073820       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:02:48.073875       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 12:02:48.073943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:02:48.073951       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:02:48.073974       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:02:48.074041       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:02:48.074316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 12:02:48.074906       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 12:02:48.074945       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:02:48.074961       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 12:02:48.080385       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [97d3c323838b4afde4b7bb4e19da906bdb34cfd67f6cdba6172f1ce0d28252f9] <==
	I1101 12:02:51.136315       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:02:51.231490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:02:51.332532       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:02:51.332565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 12:02:51.332650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:02:51.353053       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:02:51.353185       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:02:51.358372       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:02:51.358775       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:02:51.359058       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:02:51.360570       1 config.go:200] "Starting service config controller"
	I1101 12:02:51.360625       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:02:51.360665       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:02:51.360691       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:02:51.360735       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:02:51.360762       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:02:51.363273       1 config.go:309] "Starting node config controller"
	I1101 12:02:51.364117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:02:51.364169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:02:51.461920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:02:51.461952       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:02:51.461995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ca2b0b4ba7a931be0f03a537ce8af5031614b719608c6fd3650424c260713f5f] <==
	E1101 12:02:41.155396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 12:02:41.165769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 12:02:41.165904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 12:02:41.165988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 12:02:41.166069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 12:02:41.166177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 12:02:41.166260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 12:02:41.166342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 12:02:41.169763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 12:02:41.169825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 12:02:41.169867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 12:02:41.176176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 12:02:41.176317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 12:02:41.176408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 12:02:41.176516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 12:02:41.970387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 12:02:41.982218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 12:02:42.030766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 12:02:42.082683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 12:02:42.111959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 12:02:42.133677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 12:02:42.181581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 12:02:42.359918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 12:02:42.711909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 12:02:45.646003       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:02:44 newest-cni-915456 kubelet[1299]: I1101 12:02:44.894094    1299 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 12:02:45 newest-cni-915456 kubelet[1299]: I1101 12:02:45.084440    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-915456" podStartSLOduration=1.084417835 podStartE2EDuration="1.084417835s" podCreationTimestamp="2025-11-01 12:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:45.042349476 +0000 UTC m=+1.284919524" watchObservedRunningTime="2025-11-01 12:02:45.084417835 +0000 UTC m=+1.326987883"
	Nov 01 12:02:45 newest-cni-915456 kubelet[1299]: I1101 12:02:45.084642    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-915456" podStartSLOduration=1.084634544 podStartE2EDuration="1.084634544s" podCreationTimestamp="2025-11-01 12:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:45.083859568 +0000 UTC m=+1.326429796" watchObservedRunningTime="2025-11-01 12:02:45.084634544 +0000 UTC m=+1.327204592"
	Nov 01 12:02:45 newest-cni-915456 kubelet[1299]: I1101 12:02:45.130686    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-915456" podStartSLOduration=1.130651688 podStartE2EDuration="1.130651688s" podCreationTimestamp="2025-11-01 12:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:45.109993936 +0000 UTC m=+1.352564082" watchObservedRunningTime="2025-11-01 12:02:45.130651688 +0000 UTC m=+1.373221744"
	Nov 01 12:02:45 newest-cni-915456 kubelet[1299]: I1101 12:02:45.174541    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-915456" podStartSLOduration=1.1745179860000001 podStartE2EDuration="1.174517986s" podCreationTimestamp="2025-11-01 12:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:45.131702172 +0000 UTC m=+1.374272228" watchObservedRunningTime="2025-11-01 12:02:45.174517986 +0000 UTC m=+1.417088223"
	Nov 01 12:02:48 newest-cni-915456 kubelet[1299]: I1101 12:02:48.063080    1299 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 12:02:48 newest-cni-915456 kubelet[1299]: I1101 12:02:48.064254    1299 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: E1101 12:02:49.242195    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-4cxmx\" is forbidden: User \"system:node:newest-cni-915456\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-915456' and this object" podUID="bf13f387-a80a-4910-8fef-45c3ace6b6c8" pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: E1101 12:02:49.242306    1299 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-915456\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-915456' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: E1101 12:02:49.242369    1299 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-915456\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-915456' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.254532    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf13f387-a80a-4910-8fef-45c3ace6b6c8-xtables-lock\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.254575    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf13f387-a80a-4910-8fef-45c3ace6b6c8-lib-modules\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.254597    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf13f387-a80a-4910-8fef-45c3ace6b6c8-kube-proxy\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.254615    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zpc4\" (UniqueName: \"kubernetes.io/projected/bf13f387-a80a-4910-8fef-45c3ace6b6c8-kube-api-access-2zpc4\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: E1101 12:02:49.317262    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xtbw2\" is forbidden: User \"system:node:newest-cni-915456\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-915456' and this object" podUID="f91412bc-141d-4706-a3b4-f173a4a731a3" pod="kube-system/kindnet-xtbw2"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.355616    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-xtables-lock\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.355811    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqqf\" (UniqueName: \"kubernetes.io/projected/f91412bc-141d-4706-a3b4-f173a4a731a3-kube-api-access-bsqqf\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.355965    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-cni-cfg\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:02:49 newest-cni-915456 kubelet[1299]: I1101 12:02:49.356047    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-lib-modules\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:02:50 newest-cni-915456 kubelet[1299]: E1101 12:02:50.356823    1299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 01 12:02:50 newest-cni-915456 kubelet[1299]: E1101 12:02:50.356926    1299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf13f387-a80a-4910-8fef-45c3ace6b6c8-kube-proxy podName:bf13f387-a80a-4910-8fef-45c3ace6b6c8 nodeName:}" failed. No retries permitted until 2025-11-01 12:02:50.856903184 +0000 UTC m=+7.099473224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bf13f387-a80a-4910-8fef-45c3ace6b6c8-kube-proxy") pod "kube-proxy-4cxmx" (UID: "bf13f387-a80a-4910-8fef-45c3ace6b6c8") : failed to sync configmap cache: timed out waiting for the condition
	Nov 01 12:02:50 newest-cni-915456 kubelet[1299]: I1101 12:02:50.358602    1299 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 12:02:50 newest-cni-915456 kubelet[1299]: W1101 12:02:50.984827    1299 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/crio-66b03aca064cfe4cc0b83b064b9d8be704a11d34d215d1c17012f99231131944 WatchSource:0}: Error finding container 66b03aca064cfe4cc0b83b064b9d8be704a11d34d215d1c17012f99231131944: Status 404 returned error can't find the container with id 66b03aca064cfe4cc0b83b064b9d8be704a11d34d215d1c17012f99231131944
	Nov 01 12:02:51 newest-cni-915456 kubelet[1299]: I1101 12:02:51.662399    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xtbw2" podStartSLOduration=2.662379601 podStartE2EDuration="2.662379601s" podCreationTimestamp="2025-11-01 12:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:51.041986607 +0000 UTC m=+7.284556663" watchObservedRunningTime="2025-11-01 12:02:51.662379601 +0000 UTC m=+7.904949657"
	Nov 01 12:02:52 newest-cni-915456 kubelet[1299]: I1101 12:02:52.077881    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4cxmx" podStartSLOduration=3.077862534 podStartE2EDuration="3.077862534s" podCreationTimestamp="2025-11-01 12:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:52.054155643 +0000 UTC m=+8.296725691" watchObservedRunningTime="2025-11-01 12:02:52.077862534 +0000 UTC m=+8.320432590"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-915456 -n newest-cni-915456
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-915456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-fwd4w storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner: exit status 1 (82.895414ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-fwd4w" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (337.672992ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-772362 describe deploy/metrics-server -n kube-system: exit status 1 (132.439389ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-772362 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-772362
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-772362:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0",
	        "Created": "2025-11-01T12:01:37.247472685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 735620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:01:37.350522615Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/hosts",
	        "LogPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0-json.log",
	        "Name": "/default-k8s-diff-port-772362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-772362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-772362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0",
	                "LowerDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-772362",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-772362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-772362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-772362",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-772362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a0e66faad3b1142d76a6bcf32ba77cdd1fdd4ccf6c5fde1cd0cdbeb47bec50b",
	            "SandboxKey": "/var/run/docker/netns/1a0e66faad3b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-772362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:16:fd:ea:e9:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73eb4efd47c2bd595401a91b3c40a866a38f38c55c2d40593383e02853a1364a",
	                    "EndpointID": "fa026dfb60baa055463509dcd8e9653ebb51d1af3e7893d592e3a5b37aef1692",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-772362",
	                        "087d99a3919f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-772362 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-772362 logs -n 25: (1.824594194s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 11:59 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable metrics-server -p no-preload-198717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p no-preload-198717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:02 UTC │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ stop    │ -p newest-cni-915456 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-915456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:02:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:02:55.113345  742300 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:02:55.113469  742300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:55.113480  742300 out.go:374] Setting ErrFile to fd 2...
	I1101 12:02:55.113485  742300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:55.113774  742300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:02:55.114201  742300 out.go:368] Setting JSON to false
	I1101 12:02:55.115168  742300 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13524,"bootTime":1761985051,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:02:55.115242  742300 start.go:143] virtualization:  
	I1101 12:02:55.118763  742300 out.go:179] * [newest-cni-915456] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:02:55.122634  742300 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:02:55.122810  742300 notify.go:221] Checking for updates...
	I1101 12:02:55.128526  742300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:02:55.131415  742300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:02:55.134310  742300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:02:55.137422  742300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:02:55.140376  742300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:02:55.143676  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:55.144287  742300 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:02:55.181842  742300 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:02:55.182023  742300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:55.245425  742300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:55.235879687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:55.245539  742300 docker.go:319] overlay module found
	I1101 12:02:55.248773  742300 out.go:179] * Using the docker driver based on existing profile
	I1101 12:02:55.251727  742300 start.go:309] selected driver: docker
	I1101 12:02:55.251755  742300 start.go:930] validating driver "docker" against &{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:55.251855  742300 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:02:55.252587  742300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:55.321050  742300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:55.301769047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:55.321449  742300 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:02:55.321486  742300 cni.go:84] Creating CNI manager for ""
	I1101 12:02:55.321544  742300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:55.321585  742300 start.go:353] cluster config:
	{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:55.326247  742300 out.go:179] * Starting "newest-cni-915456" primary control-plane node in "newest-cni-915456" cluster
	I1101 12:02:55.329084  742300 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:02:55.331945  742300 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:02:55.334645  742300 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:02:55.334702  742300 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:02:55.334734  742300 cache.go:59] Caching tarball of preloaded images
	I1101 12:02:55.334733  742300 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:02:55.334817  742300 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:02:55.334827  742300 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:02:55.334963  742300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:55.356098  742300 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:02:55.356122  742300 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:02:55.356142  742300 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:02:55.356166  742300 start.go:360] acquireMachinesLock for newest-cni-915456: {Name:mkb1ddd4203c8257583d515453d1119aaa07ce06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:02:55.356242  742300 start.go:364] duration metric: took 54.352µs to acquireMachinesLock for "newest-cni-915456"
	I1101 12:02:55.356263  742300 start.go:96] Skipping create...Using existing machine configuration
	I1101 12:02:55.356272  742300 fix.go:54] fixHost starting: 
	I1101 12:02:55.356543  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:55.374221  742300 fix.go:112] recreateIfNeeded on newest-cni-915456: state=Stopped err=<nil>
	W1101 12:02:55.374254  742300 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 12:02:52.482081  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:52.990050  735220 node_ready.go:49] node "default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:52.990077  735220 node_ready.go:38] duration metric: took 39.512097205s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:02:52.990090  735220 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:02:52.990149  735220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:02:53.028559  735220 api_server.go:72] duration metric: took 41.630879044s to wait for apiserver process to appear ...
	I1101 12:02:53.028582  735220 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:02:53.028604  735220 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 12:02:53.044065  735220 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 12:02:53.045253  735220 api_server.go:141] control plane version: v1.34.1
	I1101 12:02:53.045319  735220 api_server.go:131] duration metric: took 16.728708ms to wait for apiserver health ...
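The health probe above goes to /healthz on port 8444 because this profile was started with --apiserver-port=8444. The same check can be reproduced by hand from the CI host; a minimal sketch, assuming the container IP 192.168.85.2 from the log and the minikube CA file that lives next to the ca.key referenced later in this log:

    # /healthz is normally reachable without credentials via the default RBAC bindings.
    curl -k https://192.168.85.2:8444/healthz
    # Or verify the serving cert against minikube's CA instead of using -k:
    curl --cacert /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt https://192.168.85.2:8444/healthz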
	I1101 12:02:53.045341  735220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:02:53.048782  735220 system_pods.go:59] 8 kube-system pods found
	I1101 12:02:53.048816  735220 system_pods.go:61] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.048823  735220 system_pods.go:61] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.048829  735220 system_pods.go:61] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.048834  735220 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.048839  735220 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.048844  735220 system_pods.go:61] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.048848  735220 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.048855  735220 system_pods.go:61] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.048861  735220 system_pods.go:74] duration metric: took 3.500665ms to wait for pod list to return data ...
	I1101 12:02:53.048869  735220 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:02:53.052663  735220 default_sa.go:45] found service account: "default"
	I1101 12:02:53.052712  735220 default_sa.go:55] duration metric: took 3.825305ms for default service account to be created ...
	I1101 12:02:53.052734  735220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:02:53.059175  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.059265  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.059289  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.059328  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.059358  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.059378  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.059416  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.059437  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.059469  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.059528  735220 retry.go:31] will retry after 214.6601ms: missing components: kube-dns
	I1101 12:02:53.280714  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.280745  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.280752  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.280758  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.280762  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.280767  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.280770  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.280775  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.280782  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.280796  735220 retry.go:31] will retry after 322.159037ms: missing components: kube-dns
	I1101 12:02:53.606672  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.606705  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.606712  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.606719  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.606723  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.606727  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.606733  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.606737  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.606745  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.606760  735220 retry.go:31] will retry after 316.945096ms: missing components: kube-dns
	I1101 12:02:53.934099  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.934231  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running
	I1101 12:02:53.934287  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.934323  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.934352  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.934399  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.934430  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.934481  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.934529  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:02:53.934562  735220 system_pods.go:126] duration metric: took 881.815622ms to wait for k8s-apps to be running ...
	I1101 12:02:53.934623  735220 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:02:53.934747  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:02:53.957397  735220 system_svc.go:56] duration metric: took 22.771609ms WaitForService to wait for kubelet
	I1101 12:02:53.957424  735220 kubeadm.go:587] duration metric: took 42.559749043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:02:53.957450  735220 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:02:53.961797  735220 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:02:53.961836  735220 node_conditions.go:123] node cpu capacity is 2
	I1101 12:02:53.961856  735220 node_conditions.go:105] duration metric: took 4.400048ms to run NodePressure ...
	I1101 12:02:53.961869  735220 start.go:242] waiting for startup goroutines ...
	I1101 12:02:53.961877  735220 start.go:247] waiting for cluster config update ...
	I1101 12:02:53.961888  735220 start.go:256] writing updated cluster config ...
	I1101 12:02:53.962236  735220 ssh_runner.go:195] Run: rm -f paused
	I1101 12:02:53.967844  735220 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:02:53.972134  735220 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.976963  735220 pod_ready.go:94] pod "coredns-66bc5c9577-czvv4" is "Ready"
	I1101 12:02:53.976989  735220 pod_ready.go:86] duration metric: took 4.827936ms for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.979442  735220 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.984113  735220 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:53.984143  735220 pod_ready.go:86] duration metric: took 4.67476ms for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.986754  735220 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.991981  735220 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:53.992020  735220 pod_ready.go:86] duration metric: took 5.236917ms for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.995095  735220 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.371944  735220 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:54.371973  735220 pod_ready.go:86] duration metric: took 376.850795ms for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.571896  735220 pod_ready.go:83] waiting for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.972543  735220 pod_ready.go:94] pod "kube-proxy-7bbw7" is "Ready"
	I1101 12:02:54.972566  735220 pod_ready.go:86] duration metric: took 400.644169ms for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.175953  735220 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.572218  735220 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:55.572251  735220 pod_ready.go:86] duration metric: took 396.270148ms for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.572267  735220 pod_ready.go:40] duration metric: took 1.604376454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:02:55.645071  735220 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:02:55.648372  735220 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772362" cluster and "default" namespace by default
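At this point the default-k8s-diff-port-772362 run is finished and the kubeconfig context has been switched to the new profile; the kubectl 1.33.2 vs. cluster 1.34.1 skew noted above is within the supported one-minor-version window. A quick post-start sanity check, sketched here rather than taken from the test itself:

    kubectl config current-context      # minikube names the context after the profile
    kubectl get nodes -o wide           # the single node should be Ready on v1.34.1
    kubectl -n kube-system get pods     # the eight pods listed above should be Running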
	I1101 12:02:55.377713  742300 out.go:252] * Restarting existing docker container for "newest-cni-915456" ...
	I1101 12:02:55.377805  742300 cli_runner.go:164] Run: docker start newest-cni-915456
	I1101 12:02:55.668898  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:55.719097  742300 kic.go:430] container "newest-cni-915456" state is running.
	I1101 12:02:55.719477  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:55.745862  742300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:55.746093  742300 machine.go:94] provisionDockerMachine start ...
	I1101 12:02:55.746155  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:55.767463  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:55.767776  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:55.767788  742300 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:02:55.768440  742300 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 12:02:58.921386  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:58.921414  742300 ubuntu.go:182] provisioning hostname "newest-cni-915456"
	I1101 12:02:58.921479  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:58.939538  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:58.939853  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:58.939871  742300 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-915456 && echo "newest-cni-915456" | sudo tee /etc/hostname
	I1101 12:02:59.103258  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:59.103352  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:59.127529  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:59.127845  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:59.127867  742300 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-915456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-915456/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-915456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:02:59.278130  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:02:59.278154  742300 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:02:59.278183  742300 ubuntu.go:190] setting up certificates
	I1101 12:02:59.278199  742300 provision.go:84] configureAuth start
	I1101 12:02:59.278259  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:59.297213  742300 provision.go:143] copyHostCerts
	I1101 12:02:59.297292  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:02:59.297311  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:02:59.297408  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:02:59.297580  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:02:59.297592  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:02:59.297633  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:02:59.297770  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:02:59.297781  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:02:59.297829  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:02:59.297916  742300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.newest-cni-915456 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-915456]
	I1101 12:02:59.896330  742300 provision.go:177] copyRemoteCerts
	I1101 12:02:59.896440  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:02:59.896515  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:59.914686  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.063883  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:00.108301  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:00.178457  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 12:03:00.248735  742300 provision.go:87] duration metric: took 970.518964ms to configureAuth
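configureAuth above regenerates the docker-machine style server certificate with the SANs listed in the provision.go line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-915456) and copies it to /etc/docker inside the container. To confirm the SANs on the provisioned machine, assuming openssl is present in the kicbase image, something along these lines works:

    minikube -p newest-cni-915456 ssh -- \
      sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'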
	I1101 12:03:00.248762  742300 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:00.248991  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:00.249155  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.329441  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:00.329886  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:03:00.329905  742300 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:00.698138  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:00.698165  742300 machine.go:97] duration metric: took 4.952059354s to provisionDockerMachine
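The sysconfig drop-in written just above only carries the --insecure-registry flag for the service CIDR; cri-o picks it up when the service is restarted, presumably via an EnvironmentFile reference in the crio unit on the kicbase image (an assumption here, not shown in the log). To see what actually landed on the machine:

    minikube -p newest-cni-915456 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p newest-cni-915456 ssh -- systemctl cat crio   # look for EnvironmentFile / CRIO_MINIKUBE_OPTIONS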
	I1101 12:03:00.698177  742300 start.go:293] postStartSetup for "newest-cni-915456" (driver="docker")
	I1101 12:03:00.698188  742300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:00.698251  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:00.698313  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.719770  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.826001  742300 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:00.829465  742300 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:00.829497  742300 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:00.829510  742300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:00.829565  742300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:00.829646  742300 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:00.829793  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:00.837903  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:00.858360  742300 start.go:296] duration metric: took 160.167322ms for postStartSetup
	I1101 12:03:00.858469  742300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:00.858520  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.875867  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.978729  742300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:00.984107  742300 fix.go:56] duration metric: took 5.627820441s for fixHost
	I1101 12:03:00.984142  742300 start.go:83] releasing machines lock for "newest-cni-915456", held for 5.627888282s
	I1101 12:03:00.984222  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:03:01.003035  742300 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:01.003103  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:01.003223  742300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:01.003279  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:01.024454  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:01.031352  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:01.125655  742300 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:01.223022  742300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:01.265485  742300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:01.270236  742300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:01.270311  742300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:01.278980  742300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
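The find/mv pass above renames any bridge or podman CNI configs to *.mk_disabled so that only the CNI minikube installs (kindnet in this run) stays active; here there was nothing to disable. Listing the directory on the node is the quickest way to see what remains, as a sketch:

    minikube -p newest-cni-915456 ssh -- ls -la /etc/cni/net.d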
	I1101 12:03:01.279004  742300 start.go:496] detecting cgroup driver to use...
	I1101 12:03:01.279038  742300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:03:01.279087  742300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:01.295098  742300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:01.308561  742300 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:01.308628  742300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:01.325162  742300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:01.344433  742300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:01.466028  742300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:01.590309  742300 docker.go:234] disabling docker service ...
	I1101 12:03:01.590428  742300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:01.606044  742300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:01.619039  742300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:01.737193  742300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:01.857808  742300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:01.870765  742300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:01.884834  742300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:01.884944  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.894394  742300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:01.894470  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.903381  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.912372  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.921179  742300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:01.929237  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.938202  742300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.946628  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.955746  742300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:01.963390  742300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:01.970762  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:02.090551  742300 ssh_runner.go:195] Run: sudo systemctl restart crio
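The sed edits above all target keys in /etc/crio/crio.conf.d/02-crio.conf: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs", conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added under default_sysctls before crio is restarted. A hedged way to confirm the resulting values (the grep pattern is illustrative, not from the test):

    minikube -p newest-cni-915456 ssh -- \
      "grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"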
	I1101 12:03:02.230555  742300 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:02.230684  742300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:02.234641  742300 start.go:564] Will wait 60s for crictl version
	I1101 12:03:02.234757  742300 ssh_runner.go:195] Run: which crictl
	I1101 12:03:02.238409  742300 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:02.263419  742300 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:02.263515  742300 ssh_runner.go:195] Run: crio --version
	I1101 12:03:02.292976  742300 ssh_runner.go:195] Run: crio --version
	I1101 12:03:02.326610  742300 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:03:02.329498  742300 cli_runner.go:164] Run: docker network inspect newest-cni-915456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:02.346289  742300 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:02.350263  742300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:02.363268  742300 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 12:03:02.365999  742300 kubeadm.go:884] updating cluster {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:02.366147  742300 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:02.366226  742300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:02.399320  742300 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:02.399343  742300 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:02.399409  742300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:02.426206  742300 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:02.426231  742300 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:02.426240  742300 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:03:02.426341  742300 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-915456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
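The ExecStart override above ends up in the 10-kubeadm.conf drop-in that is scp'd a few lines below; --cgroups-per-qos=false together with --enforce-node-allocatable= disables the QoS cgroup hierarchy, which is how minikube usually runs the kubelet under the docker/kic driver. Inspecting the effective unit on the node, using paths taken from this log:

    minikube -p newest-cni-915456 ssh -- systemctl cat kubelet
    minikube -p newest-cni-915456 ssh -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf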
	I1101 12:03:02.426430  742300 ssh_runner.go:195] Run: crio config
	I1101 12:03:02.511464  742300 cni.go:84] Creating CNI manager for ""
	I1101 12:03:02.511488  742300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:02.511511  742300 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 12:03:02.511536  742300 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-915456 NodeName:newest-cni-915456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:02.511679  742300 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-915456"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:03:02.511758  742300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:02.520041  742300 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:02.520131  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:02.527930  742300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 12:03:02.541649  742300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:02.563865  742300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 12:03:02.578660  742300 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:02.582471  742300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:02.592964  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:02.703925  742300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:02.721362  742300 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456 for IP: 192.168.76.2
	I1101 12:03:02.721393  742300 certs.go:195] generating shared ca certs ...
	I1101 12:03:02.721410  742300 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:02.721578  742300 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:02.721637  742300 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:02.721650  742300 certs.go:257] generating profile certs ...
	I1101 12:03:02.721812  742300 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.key
	I1101 12:03:02.721891  742300 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14
	I1101 12:03:02.721956  742300 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key
	I1101 12:03:02.722081  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:02.722123  742300 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:02.722138  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:02.722165  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:02.722202  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:02.722231  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:02.722286  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:02.722946  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:02.742888  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:02.759545  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:02.776109  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:02.799826  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 12:03:02.827576  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:03:02.849554  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:02.875572  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 12:03:02.905654  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:02.930783  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:02.950391  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:02.971052  742300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:02.985185  742300 ssh_runner.go:195] Run: openssl version
	I1101 12:03:02.991714  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:03.000341  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.006664  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.006759  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.058297  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:03.067007  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:03.075850  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.079839  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.079905  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.120911  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:03:03.128883  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:03.138026  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.141876  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.141955  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.182969  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:03:03.191093  742300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:03.194870  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:03:03.236383  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:03:03.277943  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:03:03.320120  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:03:03.380228  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:03:03.465231  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 12:03:03.518647  742300 kubeadm.go:401] StartCluster: {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:03.518771  742300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:03.518886  742300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:03.620358  742300 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:03.620405  742300 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:03.620427  742300 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:03.620438  742300 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:03.620442  742300 cri.go:89] found id: ""
	I1101 12:03:03.620509  742300 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:03:03.637251  742300 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:03.637365  742300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:03.649760  742300 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:03:03.649795  742300 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:03:03.649885  742300 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:03:03.659593  742300 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:03:03.660244  742300 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-915456" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:03.660631  742300 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-915456" cluster setting kubeconfig missing "newest-cni-915456" context setting]
	I1101 12:03:03.661183  742300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.662931  742300 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:03:03.675027  742300 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 12:03:03.675061  742300 kubeadm.go:602] duration metric: took 25.25942ms to restartPrimaryControlPlane
	I1101 12:03:03.675071  742300 kubeadm.go:403] duration metric: took 156.440898ms to StartCluster
	I1101 12:03:03.675120  742300 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.675201  742300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:03.676207  742300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.676486  742300 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:03.676885  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:03.676967  742300 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:03:03.677155  742300 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-915456"
	I1101 12:03:03.677200  742300 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-915456"
	W1101 12:03:03.677223  742300 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:03:03.677275  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.677840  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.681616  742300 addons.go:70] Setting default-storageclass=true in profile "newest-cni-915456"
	I1101 12:03:03.681673  742300 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-915456"
	I1101 12:03:03.681770  742300 addons.go:70] Setting dashboard=true in profile "newest-cni-915456"
	I1101 12:03:03.681809  742300 addons.go:239] Setting addon dashboard=true in "newest-cni-915456"
	W1101 12:03:03.681822  742300 addons.go:248] addon dashboard should already be in state true
	I1101 12:03:03.681851  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.682088  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.682371  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.684681  742300 out.go:179] * Verifying Kubernetes components...
	I1101 12:03:03.688181  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:03.734816  742300 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:03:03.740926  742300 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:03:03.741988  742300 addons.go:239] Setting addon default-storageclass=true in "newest-cni-915456"
	W1101 12:03:03.742006  742300 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:03:03.742032  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.742524  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.745820  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:03:03.745840  742300 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:03:03.745900  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.748010  742300 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:03:03.751678  742300 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:03.751698  742300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:03:03.751761  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.789889  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:03.813761  742300 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:03.813791  742300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:03:03.813864  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.815245  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:03.847186  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:04.043475  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:04.104972  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:04.124695  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:03:04.124717  742300 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:03:04.223345  742300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:04.320131  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:03:04.320153  742300 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:03:04.424610  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:03:04.424640  742300 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:03:04.542767  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:03:04.542790  742300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:03:04.615073  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:03:04.615101  742300 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:03:04.679410  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:03:04.679440  742300 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:03:04.726161  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:03:04.726182  742300 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:03:04.764773  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:03:04.764794  742300 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:03:04.809930  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:04.809956  742300 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:03:04.839568  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 01 12:02:53 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:53.212373599Z" level=info msg="Created container 91455b4e01dc9ca88783b20ff3b1aa872d3acb71560213af3e773be5b18916b8: kube-system/coredns-66bc5c9577-czvv4/coredns" id=cdb03bb6-d6ab-4118-94cd-6793aa73f07f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:53 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:53.218308733Z" level=info msg="Starting container: 91455b4e01dc9ca88783b20ff3b1aa872d3acb71560213af3e773be5b18916b8" id=ce99ca9e-71be-4fcf-9123-bc2af098020a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:02:53 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:53.221521418Z" level=info msg="Started container" PID=1752 containerID=91455b4e01dc9ca88783b20ff3b1aa872d3acb71560213af3e773be5b18916b8 description=kube-system/coredns-66bc5c9577-czvv4/coredns id=ce99ca9e-71be-4fcf-9123-bc2af098020a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8113e59b6bea27442a26b3a287e7f91de51147362e3429f8daf4381f09e2f45
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.312854444Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8f044422-eda9-4d66-bccf-36ac17fdf188 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.312926658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.322328708Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9 UID:3b3c8cec-2ef2-493b-987d-c2ebda1abcd9 NetNS:/var/run/netns/bf2b41ba-ce65-476b-be47-7a4a69bd66f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004a8f90}] Aliases:map[]}"
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.322368511Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.333212131Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9 UID:3b3c8cec-2ef2-493b-987d-c2ebda1abcd9 NetNS:/var/run/netns/bf2b41ba-ce65-476b-be47-7a4a69bd66f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004a8f90}] Aliases:map[]}"
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.333365454Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.342069024Z" level=info msg="Ran pod sandbox a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9 with infra container: default/busybox/POD" id=8f044422-eda9-4d66-bccf-36ac17fdf188 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.343365763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=663c7bff-a7b2-4026-8c3b-76f7f7ccb08b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.343500305Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=663c7bff-a7b2-4026-8c3b-76f7f7ccb08b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.343543325Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=663c7bff-a7b2-4026-8c3b-76f7f7ccb08b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.346990401Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4aff36e8-8e26-45d0-b3fd-f2dcace2a67f name=/runtime.v1.ImageService/PullImage
	Nov 01 12:02:56 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:56.35094237Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.427352403Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4aff36e8-8e26-45d0-b3fd-f2dcace2a67f name=/runtime.v1.ImageService/PullImage
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.428241613Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56f681ce-f326-47fa-abf9-2637fe4236bd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.431357419Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee5b2485-276b-4cbd-be51-2436a13d3a4e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.438393916Z" level=info msg="Creating container: default/busybox/busybox" id=50f5e603-711a-4241-a47f-f940e055706f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.438532125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.443647966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.444190906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.459499732Z" level=info msg="Created container 00f0a8c6b5d298dc786fde842356f3e5903b36dfc36ce1c192a27cf71d49016b: default/busybox/busybox" id=50f5e603-711a-4241-a47f-f940e055706f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.460271893Z" level=info msg="Starting container: 00f0a8c6b5d298dc786fde842356f3e5903b36dfc36ce1c192a27cf71d49016b" id=9eb64049-47f1-4911-9c9e-29998eb20bc6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:02:58 default-k8s-diff-port-772362 crio[836]: time="2025-11-01T12:02:58.464967199Z" level=info msg="Started container" PID=1806 containerID=00f0a8c6b5d298dc786fde842356f3e5903b36dfc36ce1c192a27cf71d49016b description=default/busybox/busybox id=9eb64049-47f1-4911-9c9e-29998eb20bc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	00f0a8c6b5d29       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   a23c1ded7a0b9       busybox                                                default
	91455b4e01dc9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   f8113e59b6bea       coredns-66bc5c9577-czvv4                               kube-system
	937d888bd2ec1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   b1e60e2a4e5db       storage-provisioner                                    kube-system
	7921933d9ad31       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   809857b88ec5f       kube-proxy-7bbw7                                       kube-system
	a86a48ad907bc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   2bf1a50d0bbc2       kindnet-88g26                                          kube-system
	4615135d76987       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   8c7356001cfd6       kube-apiserver-default-k8s-diff-port-772362            kube-system
	c4ffab12ba4db       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   809bd0a150ad5       kube-controller-manager-default-k8s-diff-port-772362   kube-system
	4779f4d4db1c5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   aff7bf0362ec8       etcd-default-k8s-diff-port-772362                      kube-system
	3fac5ce87754f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   bed873be08e32       kube-scheduler-default-k8s-diff-port-772362            kube-system
	
	
	==> coredns [91455b4e01dc9ca88783b20ff3b1aa872d3acb71560213af3e773be5b18916b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38637 - 19742 "HINFO IN 1222754184705523789.2806407931608909320. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034178952s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-772362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-772362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-772362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T12_02_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 12:02:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-772362
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:02:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:02:57 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:02:57 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:02:57 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:02:57 +0000   Sat, 01 Nov 2025 12:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-772362
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                42af9bdf-2107-489d-bce0-eb773b707372
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-czvv4                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-772362                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-88g26                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-772362             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-772362    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-7bbw7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-772362             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-772362 event: Registered Node default-k8s-diff-port-772362 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-772362 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4779f4d4db1c52744cf4f83c37c62763376075d81b3ba42b8faed9191e09c20d] <==
	{"level":"warn","ts":"2025-11-01T12:02:01.570298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.648504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.649444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.665507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.681050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.703335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.713525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.733650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.753117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.792494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.822704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.852497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.874572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.890173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.915042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.928805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.948596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.973279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:01.993066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:02.013950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:02.039033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:02.076719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:02.102618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:02.116313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:02:02.169471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34596","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:03:06 up  3:45,  0 user,  load average: 3.63, 3.73, 3.01
	Linux default-k8s-diff-port-772362 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a86a48ad907bc961d9cfc87b22772269a5fa56a35e03617a34f53ff77d696ec8] <==
	I1101 12:02:12.257097       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:02:12.317851       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 12:02:12.318018       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:02:12.318039       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:02:12.318056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:02:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:02:12.527465       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:02:12.527593       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:02:12.527751       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:02:12.532242       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:02:42.527124       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:02:42.528330       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 12:02:42.532862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:02:42.546414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 12:02:44.028656       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:02:44.028690       1 metrics.go:72] Registering metrics
	I1101 12:02:44.028755       1 controller.go:711] "Syncing nftables rules"
	I1101 12:02:52.533831       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:02:52.533882       1 main.go:301] handling current node
	I1101 12:03:02.526835       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:03:02.526873       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4615135d76987abecd37e40e8837f8fe3d17203829eacfed8ddc2eb397ea4851] <==
	E1101 12:02:03.231072       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 12:02:03.250349       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:02:03.258261       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:03.295137       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 12:02:03.321772       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:03.332866       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 12:02:03.379827       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:02:03.851683       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 12:02:03.866839       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 12:02:03.866872       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:02:04.873121       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:02:04.947340       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:02:05.052256       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 12:02:05.076355       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 12:02:05.100834       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 12:02:05.102510       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 12:02:05.110752       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:02:06.240287       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:02:06.279986       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 12:02:06.301799       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 12:02:10.807284       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:10.818610       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:02:11.006840       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:02:11.123536       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 12:03:04.210912       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:59804: use of closed network connection
	
	
	==> kube-controller-manager [c4ffab12ba4dbd4a2a88e9a5cb25951b9c75328461053587ab47bf0634069c36] <==
	I1101 12:02:10.096747       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:02:10.098458       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 12:02:10.098458       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 12:02:10.099067       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 12:02:10.099808       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 12:02:10.099836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 12:02:10.099867       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 12:02:10.099967       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:02:10.100307       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 12:02:10.103638       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:02:10.106835       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:02:10.115351       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 12:02:10.121541       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 12:02:10.121550       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 12:02:10.121778       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:02:10.121807       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:02:10.121812       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:02:10.121819       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:02:10.123765       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 12:02:10.123810       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 12:02:10.125002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 12:02:10.125118       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 12:02:10.127342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 12:02:10.132653       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-772362" podCIDRs=["10.244.0.0/24"]
	I1101 12:02:55.059389       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7921933d9ad3177452f795ee2e795e1fa19dbd9d0d4bacfd63364e978df748f2] <==
	I1101 12:02:12.536698       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:02:12.787724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:02:12.891558       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:02:12.891594       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 12:02:12.891702       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:02:13.004886       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:02:13.004962       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:02:13.016774       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:02:13.017076       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:02:13.017092       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:02:13.018574       1 config.go:200] "Starting service config controller"
	I1101 12:02:13.018586       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:02:13.018604       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:02:13.018611       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:02:13.018621       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:02:13.018625       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:02:13.019225       1 config.go:309] "Starting node config controller"
	I1101 12:02:13.019232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:02:13.019238       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:02:13.121224       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:02:13.121270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:02:13.121321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3fac5ce87754fb2d070931ed983adcbed55f27fb1a3d485cc5dd2d9bc5400adf] <==
	E1101 12:02:03.193561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 12:02:03.193650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 12:02:03.220123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 12:02:03.220233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 12:02:03.220298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 12:02:03.220350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 12:02:03.220404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 12:02:03.220456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 12:02:03.220600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 12:02:03.220651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 12:02:03.220699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 12:02:03.220749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 12:02:03.228926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 12:02:04.031106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 12:02:04.243834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 12:02:04.249893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 12:02:04.330902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 12:02:04.401067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 12:02:04.432931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 12:02:04.434464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 12:02:04.444139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 12:02:04.474355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 12:02:04.474502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 12:02:04.481583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1101 12:02:06.673976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:02:10 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:10.163287    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 12:02:10 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:10.163935    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.309928    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6e30bed5-15e4-4798-96a1-a7baf8f34f3c-cni-cfg\") pod \"kindnet-88g26\" (UID: \"6e30bed5-15e4-4798-96a1-a7baf8f34f3c\") " pod="kube-system/kindnet-88g26"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.309975    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e30bed5-15e4-4798-96a1-a7baf8f34f3c-lib-modules\") pod \"kindnet-88g26\" (UID: \"6e30bed5-15e4-4798-96a1-a7baf8f34f3c\") " pod="kube-system/kindnet-88g26"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.310008    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e30bed5-15e4-4798-96a1-a7baf8f34f3c-xtables-lock\") pod \"kindnet-88g26\" (UID: \"6e30bed5-15e4-4798-96a1-a7baf8f34f3c\") " pod="kube-system/kindnet-88g26"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.310033    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mng\" (UniqueName: \"kubernetes.io/projected/6e30bed5-15e4-4798-96a1-a7baf8f34f3c-kube-api-access-k2mng\") pod \"kindnet-88g26\" (UID: \"6e30bed5-15e4-4798-96a1-a7baf8f34f3c\") " pod="kube-system/kindnet-88g26"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.410368    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f1bbaf5-14a6-4155-898c-a9df5340bafc-kube-proxy\") pod \"kube-proxy-7bbw7\" (UID: \"3f1bbaf5-14a6-4155-898c-a9df5340bafc\") " pod="kube-system/kube-proxy-7bbw7"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.410416    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f1bbaf5-14a6-4155-898c-a9df5340bafc-xtables-lock\") pod \"kube-proxy-7bbw7\" (UID: \"3f1bbaf5-14a6-4155-898c-a9df5340bafc\") " pod="kube-system/kube-proxy-7bbw7"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.410433    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttql\" (UniqueName: \"kubernetes.io/projected/3f1bbaf5-14a6-4155-898c-a9df5340bafc-kube-api-access-bttql\") pod \"kube-proxy-7bbw7\" (UID: \"3f1bbaf5-14a6-4155-898c-a9df5340bafc\") " pod="kube-system/kube-proxy-7bbw7"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.410494    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f1bbaf5-14a6-4155-898c-a9df5340bafc-lib-modules\") pod \"kube-proxy-7bbw7\" (UID: \"3f1bbaf5-14a6-4155-898c-a9df5340bafc\") " pod="kube-system/kube-proxy-7bbw7"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:11.778077    1320 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 12:02:11 default-k8s-diff-port-772362 kubelet[1320]: W1101 12:02:11.877726    1320 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/crio-2bf1a50d0bbc2805ac88799edd318f3157e5aed5139240c0856413c08891660e WatchSource:0}: Error finding container 2bf1a50d0bbc2805ac88799edd318f3157e5aed5139240c0856413c08891660e: Status 404 returned error can't find the container with id 2bf1a50d0bbc2805ac88799edd318f3157e5aed5139240c0856413c08891660e
	Nov 01 12:02:12 default-k8s-diff-port-772362 kubelet[1320]: W1101 12:02:12.213841    1320 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/crio-809857b88ec5fbd76a10df5622a4bc61cbd3cd6d87fdcb56879f611e1851e42f WatchSource:0}: Error finding container 809857b88ec5fbd76a10df5622a4bc61cbd3cd6d87fdcb56879f611e1851e42f: Status 404 returned error can't find the container with id 809857b88ec5fbd76a10df5622a4bc61cbd3cd6d87fdcb56879f611e1851e42f
	Nov 01 12:02:12 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:12.598233    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7bbw7" podStartSLOduration=1.598213949 podStartE2EDuration="1.598213949s" podCreationTimestamp="2025-11-01 12:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:12.59818066 +0000 UTC m=+6.526564042" watchObservedRunningTime="2025-11-01 12:02:12.598213949 +0000 UTC m=+6.526597339"
	Nov 01 12:02:12 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:12.668590    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-88g26" podStartSLOduration=1.668571321 podStartE2EDuration="1.668571321s" podCreationTimestamp="2025-11-01 12:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:12.668131053 +0000 UTC m=+6.596514451" watchObservedRunningTime="2025-11-01 12:02:12.668571321 +0000 UTC m=+6.596954703"
	Nov 01 12:02:52 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:52.704890    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 12:02:52 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:52.879499    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8e5a477e-257d-4c98-82a6-4339be5e401e-tmp\") pod \"storage-provisioner\" (UID: \"8e5a477e-257d-4c98-82a6-4339be5e401e\") " pod="kube-system/storage-provisioner"
	Nov 01 12:02:52 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:52.879764    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8w4\" (UniqueName: \"kubernetes.io/projected/8e5a477e-257d-4c98-82a6-4339be5e401e-kube-api-access-nk8w4\") pod \"storage-provisioner\" (UID: \"8e5a477e-257d-4c98-82a6-4339be5e401e\") " pod="kube-system/storage-provisioner"
	Nov 01 12:02:52 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:52.879945    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lwq\" (UniqueName: \"kubernetes.io/projected/0b8370f6-202f-4b70-a478-0186533d331b-kube-api-access-p5lwq\") pod \"coredns-66bc5c9577-czvv4\" (UID: \"0b8370f6-202f-4b70-a478-0186533d331b\") " pod="kube-system/coredns-66bc5c9577-czvv4"
	Nov 01 12:02:52 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:52.880055    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b8370f6-202f-4b70-a478-0186533d331b-config-volume\") pod \"coredns-66bc5c9577-czvv4\" (UID: \"0b8370f6-202f-4b70-a478-0186533d331b\") " pod="kube-system/coredns-66bc5c9577-czvv4"
	Nov 01 12:02:53 default-k8s-diff-port-772362 kubelet[1320]: W1101 12:02:53.142086    1320 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/crio-f8113e59b6bea27442a26b3a287e7f91de51147362e3429f8daf4381f09e2f45 WatchSource:0}: Error finding container f8113e59b6bea27442a26b3a287e7f91de51147362e3429f8daf4381f09e2f45: Status 404 returned error can't find the container with id f8113e59b6bea27442a26b3a287e7f91de51147362e3429f8daf4381f09e2f45
	Nov 01 12:02:53 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:53.702210    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-czvv4" podStartSLOduration=42.702189103 podStartE2EDuration="42.702189103s" podCreationTimestamp="2025-11-01 12:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:53.676443942 +0000 UTC m=+47.604827324" watchObservedRunningTime="2025-11-01 12:02:53.702189103 +0000 UTC m=+47.630572493"
	Nov 01 12:02:53 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:53.736810    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.736788271 podStartE2EDuration="40.736788271s" podCreationTimestamp="2025-11-01 12:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 12:02:53.702803725 +0000 UTC m=+47.631187123" watchObservedRunningTime="2025-11-01 12:02:53.736788271 +0000 UTC m=+47.665171661"
	Nov 01 12:02:56 default-k8s-diff-port-772362 kubelet[1320]: I1101 12:02:56.118551    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mkhb\" (UniqueName: \"kubernetes.io/projected/3b3c8cec-2ef2-493b-987d-c2ebda1abcd9-kube-api-access-6mkhb\") pod \"busybox\" (UID: \"3b3c8cec-2ef2-493b-987d-c2ebda1abcd9\") " pod="default/busybox"
	Nov 01 12:02:56 default-k8s-diff-port-772362 kubelet[1320]: W1101 12:02:56.339642    1320 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/crio-a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9 WatchSource:0}: Error finding container a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9: Status 404 returned error can't find the container with id a23c1ded7a0b963c2fe0ba8d56ceed0e99a26e19c6363e438b70ab7ba92a0ca9
	
	
	==> storage-provisioner [937d888bd2ec11e5c7f31185761912bc4e24b8b31102023876adddbcf18c1e5d] <==
	I1101 12:02:53.192740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:02:53.222204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:02:53.222346       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:02:53.236571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:53.262506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:53.262667       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:02:53.270150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f733451d-a420-4621-bd46-168ecef6ff2e", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-772362_714bbbd2-5786-4353-9d9f-32218e5015a9 became leader
	I1101 12:02:53.270658       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772362_714bbbd2-5786-4353-9d9f-32218e5015a9!
	W1101 12:02:53.291186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:53.301923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:02:53.371460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772362_714bbbd2-5786-4353-9d9f-32218e5015a9!
	W1101 12:02:55.306031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:55.314014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:57.317455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:57.322537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:59.326394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:02:59.332158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:03:01.335261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:03:01.340264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:03:03.344645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:03:03.352568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:03:05.358106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:03:05.363879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-915456 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-915456 --alsologtostderr -v=1: exit status 80 (2.136593288s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-915456 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 12:03:13.453051  744761 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:03:13.453191  744761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:03:13.453202  744761 out.go:374] Setting ErrFile to fd 2...
	I1101 12:03:13.453208  744761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:03:13.453490  744761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:03:13.453814  744761 out.go:368] Setting JSON to false
	I1101 12:03:13.453840  744761 mustload.go:66] Loading cluster: newest-cni-915456
	I1101 12:03:13.454243  744761 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:13.454711  744761 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:13.473473  744761 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:13.473921  744761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:03:13.554540  744761 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-01 12:03:13.544089419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:03:13.555213  744761 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-915456 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 12:03:13.559074  744761 out.go:179] * Pausing node newest-cni-915456 ... 
	I1101 12:03:13.562082  744761 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:13.562439  744761 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:13.562493  744761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:13.583098  744761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:13.688199  744761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:03:13.701360  744761 pause.go:52] kubelet running: true
	I1101 12:03:13.701437  744761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:03:13.984807  744761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:03:13.984894  744761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:03:14.065170  744761 cri.go:89] found id: "4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f"
	I1101 12:03:14.065198  744761 cri.go:89] found id: "5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48"
	I1101 12:03:14.065208  744761 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:14.065212  744761 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:14.065215  744761 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:14.065221  744761 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:14.065224  744761 cri.go:89] found id: ""
	I1101 12:03:14.065283  744761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:03:14.077721  744761 retry.go:31] will retry after 165.306157ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:14.244083  744761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:03:14.257838  744761 pause.go:52] kubelet running: false
	I1101 12:03:14.257989  744761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:03:14.432965  744761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:03:14.433068  744761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:03:14.520693  744761 cri.go:89] found id: "4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f"
	I1101 12:03:14.520767  744761 cri.go:89] found id: "5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48"
	I1101 12:03:14.520785  744761 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:14.520805  744761 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:14.520840  744761 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:14.520863  744761 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:14.520883  744761 cri.go:89] found id: ""
	I1101 12:03:14.520961  744761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:03:14.533383  744761 retry.go:31] will retry after 222.723655ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:14.756893  744761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:03:14.770781  744761 pause.go:52] kubelet running: false
	I1101 12:03:14.770849  744761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:03:14.907751  744761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:03:14.907833  744761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:03:14.977641  744761 cri.go:89] found id: "4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f"
	I1101 12:03:14.977738  744761 cri.go:89] found id: "5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48"
	I1101 12:03:14.977759  744761 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:14.977779  744761 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:14.977813  744761 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:14.977839  744761 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:14.977859  744761 cri.go:89] found id: ""
	I1101 12:03:14.977939  744761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:03:14.989898  744761 retry.go:31] will retry after 289.659177ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:15.280485  744761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:03:15.294214  744761 pause.go:52] kubelet running: false
	I1101 12:03:15.294297  744761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:03:15.431727  744761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:03:15.431812  744761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:03:15.501639  744761 cri.go:89] found id: "4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f"
	I1101 12:03:15.501667  744761 cri.go:89] found id: "5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48"
	I1101 12:03:15.501672  744761 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:15.501677  744761 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:15.501681  744761 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:15.501685  744761 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:15.501688  744761 cri.go:89] found id: ""
	I1101 12:03:15.501766  744761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:03:15.517120  744761 out.go:203] 
	W1101 12:03:15.520050  744761 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 12:03:15.520076  744761 out.go:285] * 
	* 
	W1101 12:03:15.528077  744761 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 12:03:15.531026  744761 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-915456 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-915456
helpers_test.go:243: (dbg) docker inspect newest-cni-915456:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e",
	        "Created": "2025-11-01T12:02:18.412307635Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742428,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:02:55.407969567Z",
	            "FinishedAt": "2025-11-01T12:02:54.433138251Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/hostname",
	        "HostsPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/hosts",
	        "LogPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e-json.log",
	        "Name": "/newest-cni-915456",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-915456:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-915456",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e",
	                "LowerDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-915456",
	                "Source": "/var/lib/docker/volumes/newest-cni-915456/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-915456",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-915456",
	                "name.minikube.sigs.k8s.io": "newest-cni-915456",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bedff0ed4ae94c3a6c51a82e91030b7a369a467148fb15d46515c9f90dd8851",
	            "SandboxKey": "/var/run/docker/netns/4bedff0ed4ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-915456": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:c9:9f:77:98:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "10431394969d1cfa6501e0e03a4192e5aff1f9a8f6a90ca624ff65c125c75830",
	                    "EndpointID": "9453f7a26c88cf0f26b810cc660ba1e175628cab1575545d3db24653618620d8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-915456",
	                        "888185dcceae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
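The inspect output above is exactly what minikube parses to reach the paused node: the container's 22/tcp is published on 127.0.0.1:33815. As a minimal sketch (the Go template and key path are taken from later lines of this same log, not verified independently), the mapping and SSH access can be reproduced by hand:

	# published host port for the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-915456
	# -> 33815
	# connect the way libmachine does below: per-machine key, user "docker"
	ssh -i /home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa -p 33815 docker@127.0.0.1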
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456: exit status 2 (348.475782ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
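For a just-paused profile this is expected: --format={{.Host}} prints only the host state ("Running"), while the non-zero exit code reflects components that are not Running. A quick way to see all component states at once (template field names are assumed from minikube's standard status structure, not taken from this log):

	out/minikube-linux-arm64 status -p newest-cni-915456 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'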
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-915456 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-915456 logs -n 25: (1.067382659s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:02 UTC │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ stop    │ -p newest-cni-915456 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-915456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772362 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ image   │ newest-cni-915456 image list --format=json                                                                                                                                                                                                    │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ pause   │ -p newest-cni-915456 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:02:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:02:55.113345  742300 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:02:55.113469  742300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:55.113480  742300 out.go:374] Setting ErrFile to fd 2...
	I1101 12:02:55.113485  742300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:55.113774  742300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:02:55.114201  742300 out.go:368] Setting JSON to false
	I1101 12:02:55.115168  742300 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13524,"bootTime":1761985051,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:02:55.115242  742300 start.go:143] virtualization:  
	I1101 12:02:55.118763  742300 out.go:179] * [newest-cni-915456] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:02:55.122634  742300 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:02:55.122810  742300 notify.go:221] Checking for updates...
	I1101 12:02:55.128526  742300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:02:55.131415  742300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:02:55.134310  742300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:02:55.137422  742300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:02:55.140376  742300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:02:55.143676  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:55.144287  742300 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:02:55.181842  742300 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:02:55.182023  742300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:55.245425  742300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:55.235879687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:55.245539  742300 docker.go:319] overlay module found
	I1101 12:02:55.248773  742300 out.go:179] * Using the docker driver based on existing profile
	I1101 12:02:55.251727  742300 start.go:309] selected driver: docker
	I1101 12:02:55.251755  742300 start.go:930] validating driver "docker" against &{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:55.251855  742300 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:02:55.252587  742300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:55.321050  742300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:55.301769047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:55.321449  742300 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:02:55.321486  742300 cni.go:84] Creating CNI manager for ""
	I1101 12:02:55.321544  742300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:55.321585  742300 start.go:353] cluster config:
	{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:55.326247  742300 out.go:179] * Starting "newest-cni-915456" primary control-plane node in "newest-cni-915456" cluster
	I1101 12:02:55.329084  742300 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:02:55.331945  742300 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:02:55.334645  742300 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:02:55.334702  742300 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:02:55.334734  742300 cache.go:59] Caching tarball of preloaded images
	I1101 12:02:55.334733  742300 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:02:55.334817  742300 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:02:55.334827  742300 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:02:55.334963  742300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:55.356098  742300 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:02:55.356122  742300 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:02:55.356142  742300 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:02:55.356166  742300 start.go:360] acquireMachinesLock for newest-cni-915456: {Name:mkb1ddd4203c8257583d515453d1119aaa07ce06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:02:55.356242  742300 start.go:364] duration metric: took 54.352µs to acquireMachinesLock for "newest-cni-915456"
	I1101 12:02:55.356263  742300 start.go:96] Skipping create...Using existing machine configuration
	I1101 12:02:55.356272  742300 fix.go:54] fixHost starting: 
	I1101 12:02:55.356543  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:55.374221  742300 fix.go:112] recreateIfNeeded on newest-cni-915456: state=Stopped err=<nil>
	W1101 12:02:55.374254  742300 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 12:02:52.482081  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:52.990050  735220 node_ready.go:49] node "default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:52.990077  735220 node_ready.go:38] duration metric: took 39.512097205s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:02:52.990090  735220 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:02:52.990149  735220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:02:53.028559  735220 api_server.go:72] duration metric: took 41.630879044s to wait for apiserver process to appear ...
	I1101 12:02:53.028582  735220 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:02:53.028604  735220 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 12:02:53.044065  735220 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 12:02:53.045253  735220 api_server.go:141] control plane version: v1.34.1
	I1101 12:02:53.045319  735220 api_server.go:131] duration metric: took 16.728708ms to wait for apiserver health ...
	I1101 12:02:53.045341  735220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:02:53.048782  735220 system_pods.go:59] 8 kube-system pods found
	I1101 12:02:53.048816  735220 system_pods.go:61] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.048823  735220 system_pods.go:61] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.048829  735220 system_pods.go:61] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.048834  735220 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.048839  735220 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.048844  735220 system_pods.go:61] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.048848  735220 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.048855  735220 system_pods.go:61] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.048861  735220 system_pods.go:74] duration metric: took 3.500665ms to wait for pod list to return data ...
	I1101 12:02:53.048869  735220 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:02:53.052663  735220 default_sa.go:45] found service account: "default"
	I1101 12:02:53.052712  735220 default_sa.go:55] duration metric: took 3.825305ms for default service account to be created ...
	I1101 12:02:53.052734  735220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:02:53.059175  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.059265  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.059289  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.059328  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.059358  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.059378  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.059416  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.059437  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.059469  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.059528  735220 retry.go:31] will retry after 214.6601ms: missing components: kube-dns
	I1101 12:02:53.280714  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.280745  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.280752  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.280758  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.280762  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.280767  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.280770  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.280775  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.280782  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.280796  735220 retry.go:31] will retry after 322.159037ms: missing components: kube-dns
	I1101 12:02:53.606672  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.606705  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.606712  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.606719  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.606723  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.606727  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.606733  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.606737  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.606745  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.606760  735220 retry.go:31] will retry after 316.945096ms: missing components: kube-dns
	I1101 12:02:53.934099  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.934231  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running
	I1101 12:02:53.934287  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.934323  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.934352  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.934399  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.934430  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.934481  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.934529  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:02:53.934562  735220 system_pods.go:126] duration metric: took 881.815622ms to wait for k8s-apps to be running ...
	I1101 12:02:53.934623  735220 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:02:53.934747  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:02:53.957397  735220 system_svc.go:56] duration metric: took 22.771609ms WaitForService to wait for kubelet
	I1101 12:02:53.957424  735220 kubeadm.go:587] duration metric: took 42.559749043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:02:53.957450  735220 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:02:53.961797  735220 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:02:53.961836  735220 node_conditions.go:123] node cpu capacity is 2
	I1101 12:02:53.961856  735220 node_conditions.go:105] duration metric: took 4.400048ms to run NodePressure ...
	I1101 12:02:53.961869  735220 start.go:242] waiting for startup goroutines ...
	I1101 12:02:53.961877  735220 start.go:247] waiting for cluster config update ...
	I1101 12:02:53.961888  735220 start.go:256] writing updated cluster config ...
	I1101 12:02:53.962236  735220 ssh_runner.go:195] Run: rm -f paused
	I1101 12:02:53.967844  735220 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:02:53.972134  735220 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.976963  735220 pod_ready.go:94] pod "coredns-66bc5c9577-czvv4" is "Ready"
	I1101 12:02:53.976989  735220 pod_ready.go:86] duration metric: took 4.827936ms for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.979442  735220 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.984113  735220 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:53.984143  735220 pod_ready.go:86] duration metric: took 4.67476ms for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.986754  735220 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.991981  735220 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:53.992020  735220 pod_ready.go:86] duration metric: took 5.236917ms for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.995095  735220 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.371944  735220 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:54.371973  735220 pod_ready.go:86] duration metric: took 376.850795ms for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.571896  735220 pod_ready.go:83] waiting for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.972543  735220 pod_ready.go:94] pod "kube-proxy-7bbw7" is "Ready"
	I1101 12:02:54.972566  735220 pod_ready.go:86] duration metric: took 400.644169ms for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.175953  735220 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.572218  735220 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:55.572251  735220 pod_ready.go:86] duration metric: took 396.270148ms for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.572267  735220 pod_ready.go:40] duration metric: took 1.604376454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:02:55.645071  735220 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:02:55.648372  735220 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772362" cluster and "default" namespace by default
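	The readiness sequence above (node Ready, kube-system pods Running, then the extra per-label wait) can be spot-checked from outside the test with kubectl; a hedged equivalent, using the label selectors from the log and the kubeconfig context minikube just configured:
	
		kubectl --context default-k8s-diff-port-772362 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
		kubectl --context default-k8s-diff-port-772362 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'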
	I1101 12:02:55.377713  742300 out.go:252] * Restarting existing docker container for "newest-cni-915456" ...
	I1101 12:02:55.377805  742300 cli_runner.go:164] Run: docker start newest-cni-915456
	I1101 12:02:55.668898  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:55.719097  742300 kic.go:430] container "newest-cni-915456" state is running.
	I1101 12:02:55.719477  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:55.745862  742300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:55.746093  742300 machine.go:94] provisionDockerMachine start ...
	I1101 12:02:55.746155  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:55.767463  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:55.767776  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:55.767788  742300 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:02:55.768440  742300 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 12:02:58.921386  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:58.921414  742300 ubuntu.go:182] provisioning hostname "newest-cni-915456"
	I1101 12:02:58.921479  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:58.939538  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:58.939853  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:58.939871  742300 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-915456 && echo "newest-cni-915456" | sudo tee /etc/hostname
	I1101 12:02:59.103258  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:59.103352  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:59.127529  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:59.127845  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:59.127867  742300 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-915456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-915456/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-915456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:02:59.278130  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:02:59.278154  742300 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:02:59.278183  742300 ubuntu.go:190] setting up certificates
	I1101 12:02:59.278199  742300 provision.go:84] configureAuth start
	I1101 12:02:59.278259  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:59.297213  742300 provision.go:143] copyHostCerts
	I1101 12:02:59.297292  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:02:59.297311  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:02:59.297408  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:02:59.297580  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:02:59.297592  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:02:59.297633  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:02:59.297770  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:02:59.297781  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:02:59.297829  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:02:59.297916  742300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.newest-cni-915456 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-915456]
	I1101 12:02:59.896330  742300 provision.go:177] copyRemoteCerts
	I1101 12:02:59.896440  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:02:59.896515  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:59.914686  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.063883  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:00.108301  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:00.178457  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 12:03:00.248735  742300 provision.go:87] duration metric: took 970.518964ms to configureAuth
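	configureAuth above regenerates the machine's server certificate with the SANs listed in the provision log. If the TLS handshake fails after a restart, the certificate can be inspected directly (path and expected SANs are copied from the log above; the openssl invocation is illustrative):
	
		openssl x509 -noout -text -in /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
		# expected: 127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-915456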
	I1101 12:03:00.248762  742300 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:00.248991  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:00.249155  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.329441  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:00.329886  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:03:00.329905  742300 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:00.698138  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:00.698165  742300 machine.go:97] duration metric: took 4.952059354s to provisionDockerMachine
	I1101 12:03:00.698177  742300 start.go:293] postStartSetup for "newest-cni-915456" (driver="docker")
	I1101 12:03:00.698188  742300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:00.698251  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:00.698313  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.719770  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.826001  742300 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:00.829465  742300 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:00.829497  742300 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:00.829510  742300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:00.829565  742300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:00.829646  742300 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:00.829793  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:00.837903  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:00.858360  742300 start.go:296] duration metric: took 160.167322ms for postStartSetup
	I1101 12:03:00.858469  742300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:00.858520  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.875867  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.978729  742300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:00.984107  742300 fix.go:56] duration metric: took 5.627820441s for fixHost
	I1101 12:03:00.984142  742300 start.go:83] releasing machines lock for "newest-cni-915456", held for 5.627888282s
	I1101 12:03:00.984222  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:03:01.003035  742300 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:01.003103  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:01.003223  742300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:01.003279  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:01.024454  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:01.031352  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:01.125655  742300 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:01.223022  742300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:01.265485  742300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:01.270236  742300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:01.270311  742300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:01.278980  742300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:03:01.279004  742300 start.go:496] detecting cgroup driver to use...
	I1101 12:03:01.279038  742300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:03:01.279087  742300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:01.295098  742300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:01.308561  742300 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:01.308628  742300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:01.325162  742300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:01.344433  742300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:01.466028  742300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:01.590309  742300 docker.go:234] disabling docker service ...
	I1101 12:03:01.590428  742300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:01.606044  742300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:01.619039  742300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:01.737193  742300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:01.857808  742300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:01.870765  742300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:01.884834  742300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:01.884944  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.894394  742300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:01.894470  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.903381  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.912372  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.921179  742300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:01.929237  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.938202  742300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.946628  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.955746  742300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:01.963390  742300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:01.970762  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:02.090551  742300 ssh_runner.go:195] Run: sudo systemctl restart crio
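For reference, the CRI-O reconfiguration traced above amounts to a few in-place edits of /etc/crio/crio.conf.d/02-crio.conf followed by a daemon reload and restart. A minimal sketch that reproduces the two central settings (pause image and cgroup driver) by hand, using the same file path and values as the log; run as root on the node:

    # Point CRI-O at the pause image and cgroup driver used by this run
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Pick up the new config and confirm the runtime answers over its socket
    systemctl daemon-reload && systemctl restart crio
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version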
	I1101 12:03:02.230555  742300 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:02.230684  742300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:02.234641  742300 start.go:564] Will wait 60s for crictl version
	I1101 12:03:02.234757  742300 ssh_runner.go:195] Run: which crictl
	I1101 12:03:02.238409  742300 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:02.263419  742300 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:02.263515  742300 ssh_runner.go:195] Run: crio --version
	I1101 12:03:02.292976  742300 ssh_runner.go:195] Run: crio --version
	I1101 12:03:02.326610  742300 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:03:02.329498  742300 cli_runner.go:164] Run: docker network inspect newest-cni-915456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:02.346289  742300 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:02.350263  742300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
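The host.minikube.internal entry is injected into the node's /etc/hosts with a grep-and-rewrite rather than a plain append, so repeated starts replace the line instead of duplicating it. The same pattern isolated as a sketch (IP and hostname taken from the log):

    ip=192.168.76.1; name=host.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$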
	I1101 12:03:02.363268  742300 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 12:03:02.365999  742300 kubeadm.go:884] updating cluster {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:02.366147  742300 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:02.366226  742300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:02.399320  742300 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:02.399343  742300 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:02.399409  742300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:02.426206  742300 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:02.426231  742300 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:02.426240  742300 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:03:02.426341  742300 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-915456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:03:02.426430  742300 ssh_runner.go:195] Run: crio config
	I1101 12:03:02.511464  742300 cni.go:84] Creating CNI manager for ""
	I1101 12:03:02.511488  742300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:02.511511  742300 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 12:03:02.511536  742300 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-915456 NodeName:newest-cni-915456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:02.511679  742300 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-915456"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
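The block above is the config minikube renders and then copies to /var/tmp/minikube/kubeadm.yaml.new (see the scp line a few entries below); later it is diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring. If you want to sanity-check such a file yourself, newer kubeadm releases ship a validator; a hedged sketch, assuming a kubeadm recent enough to have the `config validate` subcommand and using the binary path from this log:

    # Validate the rendered config against kubeadm's schema (run on the node)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Cross-check that the kubelet cgroup driver matches CRI-O's cgroup_manager
    grep -E 'cgroupDriver|cgroup_manager' /var/tmp/minikube/kubeadm.yaml.new /etc/crio/crio.conf.d/02-crio.conf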
	
	I1101 12:03:02.511758  742300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:02.520041  742300 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:02.520131  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:02.527930  742300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 12:03:02.541649  742300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:02.563865  742300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 12:03:02.578660  742300 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:02.582471  742300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:02.592964  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:02.703925  742300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:02.721362  742300 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456 for IP: 192.168.76.2
	I1101 12:03:02.721393  742300 certs.go:195] generating shared ca certs ...
	I1101 12:03:02.721410  742300 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:02.721578  742300 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:02.721637  742300 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:02.721650  742300 certs.go:257] generating profile certs ...
	I1101 12:03:02.721812  742300 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.key
	I1101 12:03:02.721891  742300 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14
	I1101 12:03:02.721956  742300 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key
	I1101 12:03:02.722081  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:02.722123  742300 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:02.722138  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:02.722165  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:02.722202  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:02.722231  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:02.722286  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:02.722946  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:02.742888  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:02.759545  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:02.776109  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:02.799826  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 12:03:02.827576  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:03:02.849554  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:02.875572  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 12:03:02.905654  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:02.930783  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:02.950391  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:02.971052  742300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:02.985185  742300 ssh_runner.go:195] Run: openssl version
	I1101 12:03:02.991714  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:03.000341  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.006664  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.006759  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.058297  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:03.067007  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:03.075850  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.079839  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.079905  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.120911  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:03:03.128883  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:03.138026  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.141876  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.141955  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.182969  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
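The openssl/ln sequence above is the standard OpenSSL subject-hash layout: each CA certificate under /usr/share/ca-certificates ends up linked into /etc/ssl/certs as `<subject-hash>.0` so that OpenSSL-based clients can find it (the log goes through an intermediate /etc/ssl/certs/<name>.pem link first). A compact equivalent per certificate, with the file names taken from this log:

    for pem in 5347202.pem minikubeCA.pem 534720.pem; do
      src=/usr/share/ca-certificates/$pem
      hash=$(openssl x509 -hash -noout -in "$src")   # e.g. 3ec20f2e, b5213941, 51391683
      sudo ln -fs "$src" "/etc/ssl/certs/$hash.0"
    done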
	I1101 12:03:03.191093  742300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:03.194870  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:03:03.236383  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:03:03.277943  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:03:03.320120  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:03:03.380228  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:03:03.465231  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
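The `-checkend 86400` calls above ask openssl whether each control-plane certificate will still be valid in 24 hours (86400 seconds); minikube uses the answer to decide whether certificates need regenerating before restart. A small loop that performs the same check and prints a summary, with the certificate paths as in the log:

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      f=/var/lib/minikube/certs/$crt.crt
      if sudo openssl x509 -noout -in "$f" -checkend 86400; then
        echo "OK      $f"
      else
        echo "EXPIRES $f (within 24h, or file unreadable)"
      fi
    done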
	I1101 12:03:03.518647  742300 kubeadm.go:401] StartCluster: {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:03.518771  742300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:03.518886  742300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:03.620358  742300 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:03.620405  742300 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:03.620427  742300 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:03.620438  742300 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:03.620442  742300 cri.go:89] found id: ""
	I1101 12:03:03.620509  742300 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:03:03.637251  742300 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:03.637365  742300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:03.649760  742300 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:03:03.649795  742300 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:03:03.649885  742300 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:03:03.659593  742300 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:03:03.660244  742300 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-915456" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:03.660631  742300 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-915456" cluster setting kubeconfig missing "newest-cni-915456" context setting]
	I1101 12:03:03.661183  742300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.662931  742300 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:03:03.675027  742300 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 12:03:03.675061  742300 kubeadm.go:602] duration metric: took 25.25942ms to restartPrimaryControlPlane
	I1101 12:03:03.675071  742300 kubeadm.go:403] duration metric: took 156.440898ms to StartCluster
	I1101 12:03:03.675120  742300 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.675201  742300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:03.676207  742300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.676486  742300 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:03.676885  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:03.676967  742300 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:03:03.677155  742300 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-915456"
	I1101 12:03:03.677200  742300 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-915456"
	W1101 12:03:03.677223  742300 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:03:03.677275  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.677840  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.681616  742300 addons.go:70] Setting default-storageclass=true in profile "newest-cni-915456"
	I1101 12:03:03.681673  742300 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-915456"
	I1101 12:03:03.681770  742300 addons.go:70] Setting dashboard=true in profile "newest-cni-915456"
	I1101 12:03:03.681809  742300 addons.go:239] Setting addon dashboard=true in "newest-cni-915456"
	W1101 12:03:03.681822  742300 addons.go:248] addon dashboard should already be in state true
	I1101 12:03:03.681851  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.682088  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.682371  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.684681  742300 out.go:179] * Verifying Kubernetes components...
	I1101 12:03:03.688181  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:03.734816  742300 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:03:03.740926  742300 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:03:03.741988  742300 addons.go:239] Setting addon default-storageclass=true in "newest-cni-915456"
	W1101 12:03:03.742006  742300 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:03:03.742032  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.742524  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.745820  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:03:03.745840  742300 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:03:03.745900  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.748010  742300 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:03:03.751678  742300 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:03.751698  742300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:03:03.751761  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.789889  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:03.813761  742300 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:03.813791  742300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:03:03.813864  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.815245  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:03.847186  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:04.043475  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:04.104972  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:04.124695  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:03:04.124717  742300 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:03:04.223345  742300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:04.320131  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:03:04.320153  742300 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:03:04.424610  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:03:04.424640  742300 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:03:04.542767  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:03:04.542790  742300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:03:04.615073  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:03:04.615101  742300 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:03:04.679410  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:03:04.679440  742300 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:03:04.726161  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:03:04.726182  742300 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:03:04.764773  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:03:04.764794  742300 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:03:04.809930  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:04.809956  742300 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:03:04.839568  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:12.023934  742300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.980424779s)
	I1101 12:03:12.023982  742300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.918991517s)
	I1101 12:03:12.024297  742300 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.800925868s)
	I1101 12:03:12.024328  742300 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:03:12.024421  742300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:03:12.074521  742300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.234904447s)
	I1101 12:03:12.076871  742300 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-915456 addons enable metrics-server
	
	I1101 12:03:12.078700  742300 api_server.go:72] duration metric: took 8.402177817s to wait for apiserver process to appear ...
	I1101 12:03:12.078789  742300 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:03:12.078823  742300 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:03:12.083439  742300 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 12:03:12.086228  742300 addons.go:515] duration metric: took 8.409251154s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
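The addon manifests are applied with the version-matched kubectl under /var/lib/minikube/binaries against the node-local kubeconfig, exactly as the Run lines above show. To inspect the result from the host afterwards, something like the following works; the kubernetes-dashboard namespace is the addon's upstream default, so treat it as an assumption rather than a value taken from this log:

    kubectl --context newest-cni-915456 -n kubernetes-dashboard get deploy,svc,pods
    kubectl --context newest-cni-915456 get storageclass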
	I1101 12:03:12.091144  742300 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 12:03:12.091167  742300 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
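The 500s above come from the apiserver's /healthz while the rbac/bootstrap-roles post-start hook is still completing; minikube simply polls until the endpoint returns 200, which it does about half a second later. A hand-rolled version of that wait, with the endpoint taken from the log and -k used because the serving certificate is not in the host trust store:

    until curl -fsk https://192.168.76.2:8443/healthz >/dev/null; do
      echo "apiserver not healthy yet, retrying..."
      sleep 1
    done
    echo "apiserver healthz OK"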
	I1101 12:03:12.579801  742300 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:03:12.589139  742300 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 12:03:12.590331  742300 api_server.go:141] control plane version: v1.34.1
	I1101 12:03:12.590353  742300 api_server.go:131] duration metric: took 511.543844ms to wait for apiserver health ...
	I1101 12:03:12.590363  742300 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:03:12.596255  742300 system_pods.go:59] 8 kube-system pods found
	I1101 12:03:12.596339  742300 system_pods.go:61] "coredns-66bc5c9577-fwd4w" [18c6c47e-3e00-4794-887a-a05b3478a545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:03:12.596365  742300 system_pods.go:61] "etcd-newest-cni-915456" [c1377a6a-0f63-41c5-94d9-1c1bcf7c0049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:03:12.596404  742300 system_pods.go:61] "kindnet-xtbw2" [f91412bc-141d-4706-a3b4-f173a4a731a3] Running
	I1101 12:03:12.596432  742300 system_pods.go:61] "kube-apiserver-newest-cni-915456" [86dc9fc3-c717-40db-b0cf-633dfdb0ea87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:03:12.596470  742300 system_pods.go:61] "kube-controller-manager-newest-cni-915456" [cef60eef-ea38-49dc-b7e8-219972759c49] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:03:12.596504  742300 system_pods.go:61] "kube-proxy-4cxmx" [bf13f387-a80a-4910-8fef-45c3ace6b6c8] Running
	I1101 12:03:12.596528  742300 system_pods.go:61] "kube-scheduler-newest-cni-915456" [8d027bf3-40c5-4f8f-92f3-ae047cd94a2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:03:12.596547  742300 system_pods.go:61] "storage-provisioner" [693b39e3-8e8a-4380-8304-7513694bb16c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:03:12.596571  742300 system_pods.go:74] duration metric: took 6.202608ms to wait for pod list to return data ...
	I1101 12:03:12.596603  742300 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:03:12.599423  742300 default_sa.go:45] found service account: "default"
	I1101 12:03:12.599488  742300 default_sa.go:55] duration metric: took 2.862502ms for default service account to be created ...
	I1101 12:03:12.599515  742300 kubeadm.go:587] duration metric: took 8.922997931s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:03:12.599563  742300 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:03:12.604191  742300 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:03:12.604273  742300 node_conditions.go:123] node cpu capacity is 2
	I1101 12:03:12.604300  742300 node_conditions.go:105] duration metric: took 4.713226ms to run NodePressure ...
	I1101 12:03:12.604327  742300 start.go:242] waiting for startup goroutines ...
	I1101 12:03:12.604364  742300 start.go:247] waiting for cluster config update ...
	I1101 12:03:12.604399  742300 start.go:256] writing updated cluster config ...
	I1101 12:03:12.604714  742300 ssh_runner.go:195] Run: rm -f paused
	I1101 12:03:12.688369  742300 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:03:12.693744  742300 out.go:179] * Done! kubectl is now configured to use "newest-cni-915456" cluster and "default" namespace by default
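The "minor skew: 1" line above is minikube comparing the host's kubectl (1.33.2) with the cluster version (1.34.1); kubectl supports one minor version of skew in either direction, so the message is informational. The same comparison by hand, assuming jq is available on the host (it is not guaranteed on the CI runner):

    kubectl version -o json | jq -r '"client \(.clientVersion.gitVersion)  server \(.serverVersion.gitVersion)"'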
	
	
	==> CRI-O <==
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.158661251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.165060948Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4cxmx/POD" id=3f12692e-3582-4e33-9673-62b6e88f422b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.165136822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.18164481Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3f12692e-3582-4e33-9673-62b6e88f422b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.1899714Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0cdb5f8c-c84c-4c9c-a913-1aaa387c08ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.213133594Z" level=info msg="Ran pod sandbox c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6 with infra container: kube-system/kube-proxy-4cxmx/POD" id=3f12692e-3582-4e33-9673-62b6e88f422b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.213200746Z" level=info msg="Ran pod sandbox 2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71 with infra container: kube-system/kindnet-xtbw2/POD" id=0cdb5f8c-c84c-4c9c-a913-1aaa387c08ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.223626522Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd4e9dab-bae9-4ad7-a178-a17a6c3075cf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.224566243Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=237d8561-7036-4885-b1a3-5e6c57e41ea6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.224629965Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9929ea59-2ee0-4e5d-8949-a23e594a3713 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.22637772Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2fcb3db7-f187-4c5c-bcc6-96f0b67e8e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.232592004Z" level=info msg="Creating container: kube-system/kube-proxy-4cxmx/kube-proxy" id=1eb70559-0f92-4590-a9cf-0e28910523da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.232704809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.234657393Z" level=info msg="Creating container: kube-system/kindnet-xtbw2/kindnet-cni" id=e77016d9-ff9e-4b6d-bed2-63842f23ef02 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.234943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.253729746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.254976154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.279981384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.289928663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.324984Z" level=info msg="Created container 4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f: kube-system/kindnet-xtbw2/kindnet-cni" id=e77016d9-ff9e-4b6d-bed2-63842f23ef02 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.327512927Z" level=info msg="Starting container: 4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f" id=cd7b7a95-eef6-41a4-9ab2-1efab67951b8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.338122336Z" level=info msg="Started container" PID=1056 containerID=4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f description=kube-system/kindnet-xtbw2/kindnet-cni id=cd7b7a95-eef6-41a4-9ab2-1efab67951b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.417403574Z" level=info msg="Created container 5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48: kube-system/kube-proxy-4cxmx/kube-proxy" id=1eb70559-0f92-4590-a9cf-0e28910523da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.419491823Z" level=info msg="Starting container: 5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48" id=94a37a87-20bd-4f74-b675-02ce1e6a5c72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.422782507Z" level=info msg="Started container" PID=1060 containerID=5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48 description=kube-system/kube-proxy-4cxmx/kube-proxy id=94a37a87-20bd-4f74-b675-02ce1e6a5c72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4cf3cdf4a4145       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   2aa1b10678ba2       kindnet-xtbw2                               kube-system
	5bd6f90684439       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   c07fb4a269dfc       kube-proxy-4cxmx                            kube-system
	e735e98659987       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   07aeb8b2143d6       kube-scheduler-newest-cni-915456            kube-system
	692d04809b9f0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   036fcfd8da859       kube-apiserver-newest-cni-915456            kube-system
	e6d473f5be1fd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   7d29c18894f65       kube-controller-manager-newest-cni-915456   kube-system
	604ffe25b066e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   5b59338dd8b2d       etcd-newest-cni-915456                      kube-system
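The table above is crictl's view of the node after the restart; every ATTEMPT is 1 because these containers replaced the ones from the first boot. It can be reproduced on the node with standard crictl commands:

    sudo crictl ps -a --output table
    sudo crictl pods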
	
	
	==> describe nodes <==
	Name:               newest-cni-915456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-915456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=newest-cni-915456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T12_02_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 12:02:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-915456
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:03:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-915456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0c03d2de-2716-4951-b7fa-b9e1f188afd7
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-915456                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-xtbw2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-915456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-915456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-4cxmx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-915456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-915456 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-915456 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-915456 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-915456 event: Registered Node newest-cni-915456 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 14s)  kubelet          Node newest-cni-915456 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 14s)  kubelet          Node newest-cni-915456 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 14s)  kubelet          Node newest-cni-915456 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-915456 event: Registered Node newest-cni-915456 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8] <==
	{"level":"warn","ts":"2025-11-01T12:03:08.782231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.810889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.827846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.866146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.874623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.884649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.906954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.923986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.965219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.983723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.998795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.014717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.030181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.048403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.065948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.090889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.106188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.143830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.146057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.163922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.186274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.209601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.226515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.246716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.361115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54842","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:03:16 up  3:45,  0 user,  load average: 4.08, 3.83, 3.05
	Linux newest-cni-915456 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f] <==
	I1101 12:03:11.440209       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:03:11.522158       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 12:03:11.522285       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:03:11.522299       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:03:11.522314       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:03:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:03:11.718018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:03:11.720055       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:03:11.720902       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:03:11.721082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff] <==
	I1101 12:03:10.640409       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 12:03:10.640417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 12:03:10.640424       1 cache.go:39] Caches are synced for autoregister controller
	I1101 12:03:10.651577       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1101 12:03:10.677577       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:03:10.685783       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 12:03:10.685817       1 policy_source.go:240] refreshing policies
	I1101 12:03:10.694897       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 12:03:10.695949       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 12:03:10.695991       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:03:10.696044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:03:10.705101       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:03:10.724412       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 12:03:10.764258       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:03:10.968981       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 12:03:11.230039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:03:11.508498       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:03:11.741491       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:03:11.818204       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:03:11.856680       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:03:12.029019       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.41.60"}
	I1101 12:03:12.066073       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.98.234"}
	I1101 12:03:13.968503       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:03:14.267632       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:03:14.322426       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7] <==
	I1101 12:03:13.925971       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:03:13.926010       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:03:13.926016       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:03:13.926021       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:03:13.928297       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:03:13.932436       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 12:03:13.934638       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:03:13.942322       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:03:13.958955       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 12:03:13.959611       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:03:13.959768       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 12:03:13.959823       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:03:13.965810       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 12:03:13.965897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:13.965976       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 12:03:13.966000       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 12:03:13.965979       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:03:13.966117       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:03:13.966107       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 12:03:13.965990       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 12:03:13.971862       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:13.972934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:03:13.973012       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:03:13.973022       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:03:13.977512       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48] <==
	I1101 12:03:12.008811       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:03:12.196510       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:03:12.296736       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:03:12.296769       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 12:03:12.296856       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:03:12.323259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:03:12.323315       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:03:12.327308       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:03:12.327655       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:03:12.327678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:12.328810       1 config.go:200] "Starting service config controller"
	I1101 12:03:12.328828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:03:12.332247       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:03:12.332322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:03:12.332362       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:03:12.332388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:03:12.334600       1 config.go:309] "Starting node config controller"
	I1101 12:03:12.334708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:03:12.334740       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:03:12.428984       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:03:12.433330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:03:12.433336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e] <==
	I1101 12:03:08.822045       1 serving.go:386] Generated self-signed cert in-memory
	I1101 12:03:12.187487       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:03:12.191452       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:12.202682       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:03:12.202837       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 12:03:12.202896       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 12:03:12.202968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:03:12.204819       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:12.205357       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:12.210463       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:12.204996       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:12.214004       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:12.302999       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 12:03:12.319075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.439008     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.812836     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-915456\" already exists" pod="kube-system/kube-controller-manager-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.812872     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.818036     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.818136     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.818168     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.819436     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.845452     726 apiserver.go:52] "Watching apiserver"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.865808     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-915456\" already exists" pod="kube-system/kube-scheduler-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.865846     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.904123     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-915456\" already exists" pod="kube-system/etcd-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.904155     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.933805     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-915456\" already exists" pod="kube-system/kube-apiserver-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.944666     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.944786     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf13f387-a80a-4910-8fef-45c3ace6b6c8-lib-modules\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.944812     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf13f387-a80a-4910-8fef-45c3ace6b6c8-xtables-lock\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.945520     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-cni-cfg\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.945569     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-xtables-lock\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.945587     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-lib-modules\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.986206     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 12:03:11 newest-cni-915456 kubelet[726]: W1101 12:03:11.204950     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/crio-c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6 WatchSource:0}: Error finding container c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6: Status 404 returned error can't find the container with id c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6
	Nov 01 12:03:11 newest-cni-915456 kubelet[726]: W1101 12:03:11.206144     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/crio-2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71 WatchSource:0}: Error finding container 2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71: Status 404 returned error can't find the container with id 2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71
	Nov 01 12:03:13 newest-cni-915456 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:03:13 newest-cni-915456 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:03:13 newest-cni-915456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
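The node description in the dump above reports Ready=False with reason KubeletNotReady because no CNI configuration file was found in /etc/cni/net.d/. As an illustrative side check only (not part of the test flow, reusing the profile and context names from this report), the same condition and the CNI directory could be queried directly:

    # Print only the Ready condition message for the node (kubectl jsonpath filter)
    kubectl --context newest-cni-915456 get node newest-cni-915456 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

    # List CNI config files inside the node; an empty directory matches the NotReady reason
    out/minikube-linux-arm64 -p newest-cni-915456 ssh "ls /etc/cni/net.d/"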
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-915456 -n newest-cni-915456
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-915456 -n newest-cni-915456: exit status 2 (364.24694ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-915456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg: exit status 1 (86.071652ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-fwd4w" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-lphz7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gqnkg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg: exit status 1
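The non-running-pod check above pairs a field selector with a jsonpath template; a minimal standalone form of the same query, assuming the same kubeconfig context, is:

    # Names of all pods not in the Running phase, across every namespace
    kubectl --context newest-cni-915456 get pods -A \
      --field-selector=status.phase!=Running \
      -o jsonpath='{.items[*].metadata.name}'

The NotFound errors from the describe step stem from the missing namespace: kubectl describe pod without -n looks in default, while coredns, storage-provisioner and the dashboard pods live in kube-system and kubernetes-dashboard.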
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-915456
helpers_test.go:243: (dbg) docker inspect newest-cni-915456:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e",
	        "Created": "2025-11-01T12:02:18.412307635Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742428,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:02:55.407969567Z",
	            "FinishedAt": "2025-11-01T12:02:54.433138251Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/hostname",
	        "HostsPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/hosts",
	        "LogPath": "/var/lib/docker/containers/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e-json.log",
	        "Name": "/newest-cni-915456",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-915456:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-915456",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e",
	                "LowerDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40c2fdf77ffab94c5db65cd931ceb5724cb933b4f014761aa24849beb5580309/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-915456",
	                "Source": "/var/lib/docker/volumes/newest-cni-915456/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-915456",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-915456",
	                "name.minikube.sigs.k8s.io": "newest-cni-915456",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bedff0ed4ae94c3a6c51a82e91030b7a369a467148fb15d46515c9f90dd8851",
	            "SandboxKey": "/var/run/docker/netns/4bedff0ed4ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-915456": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:c9:9f:77:98:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "10431394969d1cfa6501e0e03a4192e5aff1f9a8f6a90ca624ff65c125c75830",
	                    "EndpointID": "9453f7a26c88cf0f26b810cc660ba1e175628cab1575545d3db24653618620d8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-915456",
	                        "888185dcceae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
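The full docker inspect dump above can be narrowed to the fields of interest with a Go template; for example, using the same container name:

    # Container state fields as shown in the State block above
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-915456

    # Host port mappings for the container (the NetworkSettings.Ports block above)
    docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-915456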
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456: exit status 2 (336.411192ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
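minikube status accepts a Go template over its status struct, which is how the bare "Running" lines above are produced; several fields can be combined in one call (Host and APIServer appear elsewhere in this report; Kubelet is assumed from minikube's default status output):

    # One-line status for the profile: host, kubelet and apiserver state
    out/minikube-linux-arm64 status -p newest-cni-915456 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'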
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-915456 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-915456 logs -n 25: (1.085624643s)
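The dump that follows was collected with logs -n 25, i.e. only the most recent 25 lines of each log source; for offline triage the same command can write everything to a file (the --file flag is assumed to be available in this minikube version, and the output path is illustrative):

    out/minikube-linux-arm64 -p newest-cni-915456 logs --file /tmp/newest-cni-915456.log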
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-816860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │                     │
	│ stop    │ -p embed-certs-816860 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:00 UTC │
	│ start   │ -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:00 UTC │ 01 Nov 25 12:01 UTC │
	│ image   │ no-preload-198717 image list --format=json                                                                                                                                                                                                    │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:02 UTC │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ stop    │ -p newest-cni-915456 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-915456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772362 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ image   │ newest-cni-915456 image list --format=json                                                                                                                                                                                                    │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ pause   │ -p newest-cni-915456 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:02:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:02:55.113345  742300 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:02:55.113469  742300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:55.113480  742300 out.go:374] Setting ErrFile to fd 2...
	I1101 12:02:55.113485  742300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:02:55.113774  742300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:02:55.114201  742300 out.go:368] Setting JSON to false
	I1101 12:02:55.115168  742300 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13524,"bootTime":1761985051,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:02:55.115242  742300 start.go:143] virtualization:  
	I1101 12:02:55.118763  742300 out.go:179] * [newest-cni-915456] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:02:55.122634  742300 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:02:55.122810  742300 notify.go:221] Checking for updates...
	I1101 12:02:55.128526  742300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:02:55.131415  742300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:02:55.134310  742300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:02:55.137422  742300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:02:55.140376  742300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:02:55.143676  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:02:55.144287  742300 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:02:55.181842  742300 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:02:55.182023  742300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:55.245425  742300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:55.235879687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:55.245539  742300 docker.go:319] overlay module found
	I1101 12:02:55.248773  742300 out.go:179] * Using the docker driver based on existing profile
	I1101 12:02:55.251727  742300 start.go:309] selected driver: docker
	I1101 12:02:55.251755  742300 start.go:930] validating driver "docker" against &{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:55.251855  742300 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:02:55.252587  742300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:02:55.321050  742300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 12:02:55.301769047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:02:55.321449  742300 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:02:55.321486  742300 cni.go:84] Creating CNI manager for ""
	I1101 12:02:55.321544  742300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:02:55.321585  742300 start.go:353] cluster config:
	{Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:02:55.326247  742300 out.go:179] * Starting "newest-cni-915456" primary control-plane node in "newest-cni-915456" cluster
	I1101 12:02:55.329084  742300 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:02:55.331945  742300 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:02:55.334645  742300 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:02:55.334702  742300 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:02:55.334734  742300 cache.go:59] Caching tarball of preloaded images
	I1101 12:02:55.334733  742300 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:02:55.334817  742300 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:02:55.334827  742300 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:02:55.334963  742300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:55.356098  742300 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:02:55.356122  742300 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:02:55.356142  742300 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:02:55.356166  742300 start.go:360] acquireMachinesLock for newest-cni-915456: {Name:mkb1ddd4203c8257583d515453d1119aaa07ce06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:02:55.356242  742300 start.go:364] duration metric: took 54.352µs to acquireMachinesLock for "newest-cni-915456"
	I1101 12:02:55.356263  742300 start.go:96] Skipping create...Using existing machine configuration
	I1101 12:02:55.356272  742300 fix.go:54] fixHost starting: 
	I1101 12:02:55.356543  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:55.374221  742300 fix.go:112] recreateIfNeeded on newest-cni-915456: state=Stopped err=<nil>
	W1101 12:02:55.374254  742300 fix.go:138] unexpected machine state, will restart: <nil>
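
For reference, the state=Stopped reported above is read with docker container inspect --format={{.State.Status}} (the command logged by cli_runner just before it). A minimal Go sketch of that lookup, with the container name copied from the log and purely illustrative:

// inspect_state.go - illustrative only: query a container's state the same
// way the cli_runner step above does, via `docker container inspect`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("newest-cni-915456") // name taken from the log
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container state:", state) // e.g. "exited" or "running"
}
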
	W1101 12:02:52.482081  735220 node_ready.go:57] node "default-k8s-diff-port-772362" has "Ready":"False" status (will retry)
	I1101 12:02:52.990050  735220 node_ready.go:49] node "default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:52.990077  735220 node_ready.go:38] duration metric: took 39.512097205s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:02:52.990090  735220 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:02:52.990149  735220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:02:53.028559  735220 api_server.go:72] duration metric: took 41.630879044s to wait for apiserver process to appear ...
	I1101 12:02:53.028582  735220 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:02:53.028604  735220 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 12:02:53.044065  735220 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 12:02:53.045253  735220 api_server.go:141] control plane version: v1.34.1
	I1101 12:02:53.045319  735220 api_server.go:131] duration metric: took 16.728708ms to wait for apiserver health ...
	I1101 12:02:53.045341  735220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:02:53.048782  735220 system_pods.go:59] 8 kube-system pods found
	I1101 12:02:53.048816  735220 system_pods.go:61] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.048823  735220 system_pods.go:61] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.048829  735220 system_pods.go:61] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.048834  735220 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.048839  735220 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.048844  735220 system_pods.go:61] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.048848  735220 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.048855  735220 system_pods.go:61] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.048861  735220 system_pods.go:74] duration metric: took 3.500665ms to wait for pod list to return data ...
	I1101 12:02:53.048869  735220 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:02:53.052663  735220 default_sa.go:45] found service account: "default"
	I1101 12:02:53.052712  735220 default_sa.go:55] duration metric: took 3.825305ms for default service account to be created ...
	I1101 12:02:53.052734  735220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:02:53.059175  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.059265  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.059289  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.059328  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.059358  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.059378  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.059416  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.059437  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.059469  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.059528  735220 retry.go:31] will retry after 214.6601ms: missing components: kube-dns
	I1101 12:02:53.280714  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.280745  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.280752  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.280758  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.280762  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.280767  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.280770  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.280775  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.280782  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.280796  735220 retry.go:31] will retry after 322.159037ms: missing components: kube-dns
	I1101 12:02:53.606672  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.606705  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:02:53.606712  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.606719  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.606723  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.606727  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.606733  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.606737  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.606745  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 12:02:53.606760  735220 retry.go:31] will retry after 316.945096ms: missing components: kube-dns
	I1101 12:02:53.934099  735220 system_pods.go:86] 8 kube-system pods found
	I1101 12:02:53.934231  735220 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running
	I1101 12:02:53.934287  735220 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running
	I1101 12:02:53.934323  735220 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:02:53.934352  735220 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running
	I1101 12:02:53.934399  735220 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running
	I1101 12:02:53.934430  735220 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:02:53.934481  735220 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running
	I1101 12:02:53.934529  735220 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:02:53.934562  735220 system_pods.go:126] duration metric: took 881.815622ms to wait for k8s-apps to be running ...
	I1101 12:02:53.934623  735220 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:02:53.934747  735220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:02:53.957397  735220 system_svc.go:56] duration metric: took 22.771609ms WaitForService to wait for kubelet
	I1101 12:02:53.957424  735220 kubeadm.go:587] duration metric: took 42.559749043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:02:53.957450  735220 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:02:53.961797  735220 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:02:53.961836  735220 node_conditions.go:123] node cpu capacity is 2
	I1101 12:02:53.961856  735220 node_conditions.go:105] duration metric: took 4.400048ms to run NodePressure ...
	I1101 12:02:53.961869  735220 start.go:242] waiting for startup goroutines ...
	I1101 12:02:53.961877  735220 start.go:247] waiting for cluster config update ...
	I1101 12:02:53.961888  735220 start.go:256] writing updated cluster config ...
	I1101 12:02:53.962236  735220 ssh_runner.go:195] Run: rm -f paused
	I1101 12:02:53.967844  735220 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:02:53.972134  735220 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.976963  735220 pod_ready.go:94] pod "coredns-66bc5c9577-czvv4" is "Ready"
	I1101 12:02:53.976989  735220 pod_ready.go:86] duration metric: took 4.827936ms for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.979442  735220 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.984113  735220 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:53.984143  735220 pod_ready.go:86] duration metric: took 4.67476ms for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.986754  735220 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.991981  735220 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:53.992020  735220 pod_ready.go:86] duration metric: took 5.236917ms for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:53.995095  735220 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.371944  735220 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:54.371973  735220 pod_ready.go:86] duration metric: took 376.850795ms for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.571896  735220 pod_ready.go:83] waiting for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:54.972543  735220 pod_ready.go:94] pod "kube-proxy-7bbw7" is "Ready"
	I1101 12:02:54.972566  735220 pod_ready.go:86] duration metric: took 400.644169ms for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.175953  735220 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.572218  735220 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772362" is "Ready"
	I1101 12:02:55.572251  735220 pod_ready.go:86] duration metric: took 396.270148ms for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:02:55.572267  735220 pod_ready.go:40] duration metric: took 1.604376454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:02:55.645071  735220 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:02:55.648372  735220 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772362" cluster and "default" namespace by default
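
The healthz wait logged for default-k8s-diff-port-772362 above is essentially an HTTPS GET that expects a 200 response with body "ok". A rough sketch under that assumption, reusing the https://192.168.85.2:8444 endpoint from the log and skipping TLS verification for brevity (the real check trusts the cluster CA instead):

// healthz_probe.go - rough sketch of the apiserver healthz check seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut only; minikube verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
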
	I1101 12:02:55.377713  742300 out.go:252] * Restarting existing docker container for "newest-cni-915456" ...
	I1101 12:02:55.377805  742300 cli_runner.go:164] Run: docker start newest-cni-915456
	I1101 12:02:55.668898  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:02:55.719097  742300 kic.go:430] container "newest-cni-915456" state is running.
	I1101 12:02:55.719477  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:55.745862  742300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/config.json ...
	I1101 12:02:55.746093  742300 machine.go:94] provisionDockerMachine start ...
	I1101 12:02:55.746155  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:55.767463  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:55.767776  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:55.767788  742300 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:02:55.768440  742300 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 12:02:58.921386  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:58.921414  742300 ubuntu.go:182] provisioning hostname "newest-cni-915456"
	I1101 12:02:58.921479  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:58.939538  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:58.939853  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:58.939871  742300 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-915456 && echo "newest-cni-915456" | sudo tee /etc/hostname
	I1101 12:02:59.103258  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-915456
	
	I1101 12:02:59.103352  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:59.127529  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:02:59.127845  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:02:59.127867  742300 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-915456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-915456/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-915456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:02:59.278130  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:02:59.278154  742300 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:02:59.278183  742300 ubuntu.go:190] setting up certificates
	I1101 12:02:59.278199  742300 provision.go:84] configureAuth start
	I1101 12:02:59.278259  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:02:59.297213  742300 provision.go:143] copyHostCerts
	I1101 12:02:59.297292  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:02:59.297311  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:02:59.297408  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:02:59.297580  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:02:59.297592  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:02:59.297633  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:02:59.297770  742300 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:02:59.297781  742300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:02:59.297829  742300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:02:59.297916  742300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.newest-cni-915456 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-915456]
	I1101 12:02:59.896330  742300 provision.go:177] copyRemoteCerts
	I1101 12:02:59.896440  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:02:59.896515  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:02:59.914686  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.063883  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:00.108301  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:00.178457  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 12:03:00.248735  742300 provision.go:87] duration metric: took 970.518964ms to configureAuth
	I1101 12:03:00.248762  742300 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:00.248991  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:00.249155  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.329441  742300 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:00.329886  742300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1101 12:03:00.329905  742300 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:00.698138  742300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:00.698165  742300 machine.go:97] duration metric: took 4.952059354s to provisionDockerMachine
	I1101 12:03:00.698177  742300 start.go:293] postStartSetup for "newest-cni-915456" (driver="docker")
	I1101 12:03:00.698188  742300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:00.698251  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:00.698313  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.719770  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.826001  742300 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:00.829465  742300 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:00.829497  742300 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:00.829510  742300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:00.829565  742300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:00.829646  742300 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:00.829793  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:00.837903  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:00.858360  742300 start.go:296] duration metric: took 160.167322ms for postStartSetup
	I1101 12:03:00.858469  742300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:00.858520  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:00.875867  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:00.978729  742300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:00.984107  742300 fix.go:56] duration metric: took 5.627820441s for fixHost
	I1101 12:03:00.984142  742300 start.go:83] releasing machines lock for "newest-cni-915456", held for 5.627888282s
	I1101 12:03:00.984222  742300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-915456
	I1101 12:03:01.003035  742300 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:01.003103  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:01.003223  742300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:01.003279  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:01.024454  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:01.031352  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:01.125655  742300 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:01.223022  742300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:01.265485  742300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:01.270236  742300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:01.270311  742300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:01.278980  742300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:03:01.279004  742300 start.go:496] detecting cgroup driver to use...
	I1101 12:03:01.279038  742300 detect.go:187] detected "cgroupfs" cgroup driver on host os
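
The "cgroupfs" value detected here matches the CgroupDriver field in the docker info dump earlier in this log. A small sketch of reading that field directly (illustrative, not minikube's own detect code):

// cgroup_driver.go - illustrative: read the Docker daemon's cgroup driver,
// the same field shown as CgroupDriver in the docker info output above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs" or "systemd"
}
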
	I1101 12:03:01.279087  742300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:01.295098  742300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:01.308561  742300 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:01.308628  742300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:01.325162  742300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:01.344433  742300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:01.466028  742300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:01.590309  742300 docker.go:234] disabling docker service ...
	I1101 12:03:01.590428  742300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:01.606044  742300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:01.619039  742300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:01.737193  742300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:01.857808  742300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:01.870765  742300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:01.884834  742300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:01.884944  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.894394  742300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:01.894470  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.903381  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.912372  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.921179  742300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:01.929237  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.938202  742300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.946628  742300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:01.955746  742300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:01.963390  742300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:01.970762  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:02.090551  742300 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 12:03:02.230555  742300 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:02.230684  742300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:02.234641  742300 start.go:564] Will wait 60s for crictl version
	I1101 12:03:02.234757  742300 ssh_runner.go:195] Run: which crictl
	I1101 12:03:02.238409  742300 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:02.263419  742300 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:02.263515  742300 ssh_runner.go:195] Run: crio --version
	I1101 12:03:02.292976  742300 ssh_runner.go:195] Run: crio --version
	I1101 12:03:02.326610  742300 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
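
The two 60-second waits above (for /var/run/crio/crio.sock and then for crictl to answer) amount to polling until the socket path exists and the runtime responds. A minimal polling sketch under that assumption:

// wait_socket.go - minimal sketch of the "Will wait 60s for socket path"
// step above: poll for the CRI-O socket until it appears or time runs out.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
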
	I1101 12:03:02.329498  742300 cli_runner.go:164] Run: docker network inspect newest-cni-915456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:02.346289  742300 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:02.350263  742300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
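
The /etc/hosts rewrite above follows a simple pattern: drop any stale host.minikube.internal line, then append a fresh one. A standalone Go sketch of the same idea, with the IP and hostname taken from the log (writing /etc/hosts needs root, so this is illustrative):

// ensure_hosts_entry.go - sketch of the filter-and-append /etc/hosts update
// shown in the bash one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the old entry, like `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
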
	I1101 12:03:02.363268  742300 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 12:03:02.365999  742300 kubeadm.go:884] updating cluster {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:02.366147  742300 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:02.366226  742300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:02.399320  742300 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:02.399343  742300 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:02.399409  742300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:02.426206  742300 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:02.426231  742300 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:02.426240  742300 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:03:02.426341  742300 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-915456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:03:02.426430  742300 ssh_runner.go:195] Run: crio config
	I1101 12:03:02.511464  742300 cni.go:84] Creating CNI manager for ""
	I1101 12:03:02.511488  742300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:02.511511  742300 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 12:03:02.511536  742300 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-915456 NodeName:newest-cni-915456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:02.511679  742300 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-915456"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:03:02.511758  742300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:02.520041  742300 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:02.520131  742300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:02.527930  742300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 12:03:02.541649  742300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:02.563865  742300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 12:03:02.578660  742300 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:02.582471  742300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:02.592964  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:02.703925  742300 ssh_runner.go:195] Run: sudo systemctl start kubelet
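
At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new and kubelet has been started. One sanity check worth sketching, since a mismatch is a common failure mode: the cgroupDriver in that config should agree with the cgroup_manager written into CRI-O's drop-in earlier in this log (both cgroupfs here). The paths below are the ones shown in the log; the check itself is an assumption, not something minikube runs.

// check_cgroup_match.go - sketch: compare the kubelet cgroupDriver in the
// rendered kubeadm config with CRI-O's cgroup_manager drop-in setting.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func firstMatch(path, pattern string) string {
	data, err := os.ReadFile(path)
	if err != nil {
		return ""
	}
	m := regexp.MustCompile(pattern).FindStringSubmatch(string(data))
	if len(m) < 2 {
		return ""
	}
	return m[1]
}

func main() {
	kubelet := firstMatch("/var/tmp/minikube/kubeadm.yaml.new", `cgroupDriver:\s*(\S+)`)
	crio := firstMatch("/etc/crio/crio.conf.d/02-crio.conf", `cgroup_manager\s*=\s*"([^"]+)"`)
	fmt.Printf("kubelet cgroupDriver=%q, crio cgroup_manager=%q\n", kubelet, crio)
	if kubelet != "" && kubelet == crio {
		fmt.Println("drivers match (both should be cgroupfs per this log)")
	}
}
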
	I1101 12:03:02.721362  742300 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456 for IP: 192.168.76.2
	I1101 12:03:02.721393  742300 certs.go:195] generating shared ca certs ...
	I1101 12:03:02.721410  742300 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:02.721578  742300 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:02.721637  742300 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:02.721650  742300 certs.go:257] generating profile certs ...
	I1101 12:03:02.721812  742300 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/client.key
	I1101 12:03:02.721891  742300 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key.4fb12c14
	I1101 12:03:02.721956  742300 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key
	I1101 12:03:02.722081  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:02.722123  742300 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:02.722138  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:02.722165  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:02.722202  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:02.722231  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:02.722286  742300 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:02.722946  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:02.742888  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:02.759545  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:02.776109  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:02.799826  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 12:03:02.827576  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:03:02.849554  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:02.875572  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/newest-cni-915456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 12:03:02.905654  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:02.930783  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:02.950391  742300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:02.971052  742300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:02.985185  742300 ssh_runner.go:195] Run: openssl version
	I1101 12:03:02.991714  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:03.000341  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.006664  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.006759  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:03.058297  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:03.067007  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:03.075850  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.079839  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.079905  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:03.120911  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:03:03.128883  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:03.138026  742300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.141876  742300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.141955  742300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:03.182969  742300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
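The three hash-and-symlink sequences above follow OpenSSL's trust-store convention: each PEM copied under /usr/share/ca-certificates gets a symlink /etc/ssl/certs/<subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A minimal Go sketch of that convention (it shells out to openssl the same way; the path in main is illustrative, not taken from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore computes the OpenSSL subject hash of certPath and
// creates /etc/ssl/certs/<hash>.0 pointing at it, mirroring the
// hash-and-symlink steps in the log above.
func linkIntoTrustStore(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale symlink for the same hash.
	if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}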
	I1101 12:03:03.191093  742300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:03.194870  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:03:03.236383  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:03:03.277943  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:03:03.320120  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:03:03.380228  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:03:03.465231  742300 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
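Each `-checkend 86400` call above asks openssl whether a control-plane certificate will still be valid 24 hours from now. The same check in plain Go, as a rough sketch using only the standard library (the certificate path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within the given window, i.e. the condition `openssl x509 -checkend` tests.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}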
	I1101 12:03:03.518647  742300 kubeadm.go:401] StartCluster: {Name:newest-cni-915456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-915456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:03.518771  742300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:03.518886  742300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:03.620358  742300 cri.go:89] found id: "e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e"
	I1101 12:03:03.620405  742300 cri.go:89] found id: "692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff"
	I1101 12:03:03.620427  742300 cri.go:89] found id: "e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7"
	I1101 12:03:03.620438  742300 cri.go:89] found id: "604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8"
	I1101 12:03:03.620442  742300 cri.go:89] found id: ""
	I1101 12:03:03.620509  742300 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:03:03.637251  742300 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:03.637365  742300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:03.649760  742300 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:03:03.649795  742300 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:03:03.649885  742300 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:03:03.659593  742300 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:03:03.660244  742300 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-915456" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:03.660631  742300 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-915456" cluster setting kubeconfig missing "newest-cni-915456" context setting]
	I1101 12:03:03.661183  742300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
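The repair step adds the missing cluster and context entries to the shared kubeconfig before rewriting it under the file lock. A sketch of that kind of update with client-go's clientcmd package; the path, entry names, and server address here are illustrative and credentials are omitted:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/.kube/config" // illustrative path, not the one in this run
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start from an empty config if the file is missing
	}
	// Add the cluster and a context pointing at it, mirroring the
	// "kubeconfig missing cluster/context setting" repair in the log.
	cfg.Clusters["newest-cni-915456"] = &api.Cluster{
		Server:               "https://192.168.76.2:8443",
		CertificateAuthority: "/home/jenkins/.minikube/ca.crt", // illustrative
	}
	cfg.Contexts["newest-cni-915456"] = &api.Context{
		Cluster:  "newest-cni-915456",
		AuthInfo: "newest-cni-915456", // user credentials omitted in this sketch
	}
	cfg.CurrentContext = "newest-cni-915456"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}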
	I1101 12:03:03.662931  742300 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:03:03.675027  742300 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 12:03:03.675061  742300 kubeadm.go:602] duration metric: took 25.25942ms to restartPrimaryControlPlane
	I1101 12:03:03.675071  742300 kubeadm.go:403] duration metric: took 156.440898ms to StartCluster
	I1101 12:03:03.675120  742300 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.675201  742300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:03.676207  742300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:03.676486  742300 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:03.676885  742300 config.go:182] Loaded profile config "newest-cni-915456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:03.676967  742300 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:03:03.677155  742300 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-915456"
	I1101 12:03:03.677200  742300 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-915456"
	W1101 12:03:03.677223  742300 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:03:03.677275  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.677840  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.681616  742300 addons.go:70] Setting default-storageclass=true in profile "newest-cni-915456"
	I1101 12:03:03.681673  742300 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-915456"
	I1101 12:03:03.681770  742300 addons.go:70] Setting dashboard=true in profile "newest-cni-915456"
	I1101 12:03:03.681809  742300 addons.go:239] Setting addon dashboard=true in "newest-cni-915456"
	W1101 12:03:03.681822  742300 addons.go:248] addon dashboard should already be in state true
	I1101 12:03:03.681851  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.682088  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.682371  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.684681  742300 out.go:179] * Verifying Kubernetes components...
	I1101 12:03:03.688181  742300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:03.734816  742300 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:03:03.740926  742300 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:03:03.741988  742300 addons.go:239] Setting addon default-storageclass=true in "newest-cni-915456"
	W1101 12:03:03.742006  742300 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:03:03.742032  742300 host.go:66] Checking if "newest-cni-915456" exists ...
	I1101 12:03:03.742524  742300 cli_runner.go:164] Run: docker container inspect newest-cni-915456 --format={{.State.Status}}
	I1101 12:03:03.745820  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:03:03.745840  742300 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:03:03.745900  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.748010  742300 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:03:03.751678  742300 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:03.751698  742300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:03:03.751761  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.789889  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:03.813761  742300 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:03.813791  742300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:03:03.813864  742300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-915456
	I1101 12:03:03.815245  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:03.847186  742300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/newest-cni-915456/id_rsa Username:docker}
	I1101 12:03:04.043475  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:04.104972  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:04.124695  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:03:04.124717  742300 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:03:04.223345  742300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:04.320131  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:03:04.320153  742300 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:03:04.424610  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:03:04.424640  742300 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:03:04.542767  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:03:04.542790  742300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:03:04.615073  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:03:04.615101  742300 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:03:04.679410  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:03:04.679440  742300 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:03:04.726161  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:03:04.726182  742300 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:03:04.764773  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:03:04.764794  742300 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:03:04.809930  742300 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:04.809956  742300 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:03:04.839568  742300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:12.023934  742300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.980424779s)
	I1101 12:03:12.023982  742300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.918991517s)
	I1101 12:03:12.024297  742300 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.800925868s)
	I1101 12:03:12.024328  742300 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:03:12.024421  742300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:03:12.074521  742300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.234904447s)
	I1101 12:03:12.076871  742300 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-915456 addons enable metrics-server
	
	I1101 12:03:12.078700  742300 api_server.go:72] duration metric: took 8.402177817s to wait for apiserver process to appear ...
	I1101 12:03:12.078789  742300 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:03:12.078823  742300 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:03:12.083439  742300 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 12:03:12.086228  742300 addons.go:515] duration metric: took 8.409251154s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 12:03:12.091144  742300 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 12:03:12.091167  742300 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 12:03:12.579801  742300 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 12:03:12.589139  742300 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 12:03:12.590331  742300 api_server.go:141] control plane version: v1.34.1
	I1101 12:03:12.590353  742300 api_server.go:131] duration metric: took 511.543844ms to wait for apiserver health ...
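The healthz wait simply re-polls https://<apiserver>:8443/healthz every ~500ms until the 500 (here, the rbac/bootstrap-roles post-start hook still pending) turns into a 200. A self-contained sketch of such a poll loop; TLS verification is skipped only because the test cluster's CA is self-signed, and the address is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}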
	I1101 12:03:12.590363  742300 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:03:12.596255  742300 system_pods.go:59] 8 kube-system pods found
	I1101 12:03:12.596339  742300 system_pods.go:61] "coredns-66bc5c9577-fwd4w" [18c6c47e-3e00-4794-887a-a05b3478a545] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:03:12.596365  742300 system_pods.go:61] "etcd-newest-cni-915456" [c1377a6a-0f63-41c5-94d9-1c1bcf7c0049] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:03:12.596404  742300 system_pods.go:61] "kindnet-xtbw2" [f91412bc-141d-4706-a3b4-f173a4a731a3] Running
	I1101 12:03:12.596432  742300 system_pods.go:61] "kube-apiserver-newest-cni-915456" [86dc9fc3-c717-40db-b0cf-633dfdb0ea87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:03:12.596470  742300 system_pods.go:61] "kube-controller-manager-newest-cni-915456" [cef60eef-ea38-49dc-b7e8-219972759c49] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:03:12.596504  742300 system_pods.go:61] "kube-proxy-4cxmx" [bf13f387-a80a-4910-8fef-45c3ace6b6c8] Running
	I1101 12:03:12.596528  742300 system_pods.go:61] "kube-scheduler-newest-cni-915456" [8d027bf3-40c5-4f8f-92f3-ae047cd94a2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:03:12.596547  742300 system_pods.go:61] "storage-provisioner" [693b39e3-8e8a-4380-8304-7513694bb16c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 12:03:12.596571  742300 system_pods.go:74] duration metric: took 6.202608ms to wait for pod list to return data ...
	I1101 12:03:12.596603  742300 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:03:12.599423  742300 default_sa.go:45] found service account: "default"
	I1101 12:03:12.599488  742300 default_sa.go:55] duration metric: took 2.862502ms for default service account to be created ...
	I1101 12:03:12.599515  742300 kubeadm.go:587] duration metric: took 8.922997931s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 12:03:12.599563  742300 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:03:12.604191  742300 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:03:12.604273  742300 node_conditions.go:123] node cpu capacity is 2
	I1101 12:03:12.604300  742300 node_conditions.go:105] duration metric: took 4.713226ms to run NodePressure ...
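The NodePressure check reads capacity and the pressure conditions straight off the Node object. A rough client-go equivalent, not minikube's actual code (kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		storage := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// The three pressure conditions the log verifies.
			if c.Type == "MemoryPressure" || c.Type == "DiskPressure" || c.Type == "PIDPressure" {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}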
	I1101 12:03:12.604327  742300 start.go:242] waiting for startup goroutines ...
	I1101 12:03:12.604364  742300 start.go:247] waiting for cluster config update ...
	I1101 12:03:12.604399  742300 start.go:256] writing updated cluster config ...
	I1101 12:03:12.604714  742300 ssh_runner.go:195] Run: rm -f paused
	I1101 12:03:12.688369  742300 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:03:12.693744  742300 out.go:179] * Done! kubectl is now configured to use "newest-cni-915456" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.158661251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.165060948Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4cxmx/POD" id=3f12692e-3582-4e33-9673-62b6e88f422b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.165136822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.18164481Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3f12692e-3582-4e33-9673-62b6e88f422b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.1899714Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0cdb5f8c-c84c-4c9c-a913-1aaa387c08ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.213133594Z" level=info msg="Ran pod sandbox c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6 with infra container: kube-system/kube-proxy-4cxmx/POD" id=3f12692e-3582-4e33-9673-62b6e88f422b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.213200746Z" level=info msg="Ran pod sandbox 2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71 with infra container: kube-system/kindnet-xtbw2/POD" id=0cdb5f8c-c84c-4c9c-a913-1aaa387c08ba name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.223626522Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd4e9dab-bae9-4ad7-a178-a17a6c3075cf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.224566243Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=237d8561-7036-4885-b1a3-5e6c57e41ea6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.224629965Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9929ea59-2ee0-4e5d-8949-a23e594a3713 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.22637772Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2fcb3db7-f187-4c5c-bcc6-96f0b67e8e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.232592004Z" level=info msg="Creating container: kube-system/kube-proxy-4cxmx/kube-proxy" id=1eb70559-0f92-4590-a9cf-0e28910523da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.232704809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.234657393Z" level=info msg="Creating container: kube-system/kindnet-xtbw2/kindnet-cni" id=e77016d9-ff9e-4b6d-bed2-63842f23ef02 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.234943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.253729746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.254976154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.279981384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.289928663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.324984Z" level=info msg="Created container 4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f: kube-system/kindnet-xtbw2/kindnet-cni" id=e77016d9-ff9e-4b6d-bed2-63842f23ef02 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.327512927Z" level=info msg="Starting container: 4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f" id=cd7b7a95-eef6-41a4-9ab2-1efab67951b8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.338122336Z" level=info msg="Started container" PID=1056 containerID=4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f description=kube-system/kindnet-xtbw2/kindnet-cni id=cd7b7a95-eef6-41a4-9ab2-1efab67951b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.417403574Z" level=info msg="Created container 5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48: kube-system/kube-proxy-4cxmx/kube-proxy" id=1eb70559-0f92-4590-a9cf-0e28910523da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.419491823Z" level=info msg="Starting container: 5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48" id=94a37a87-20bd-4f74-b675-02ce1e6a5c72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:03:11 newest-cni-915456 crio[610]: time="2025-11-01T12:03:11.422782507Z" level=info msg="Started container" PID=1060 containerID=5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48 description=kube-system/kube-proxy-4cxmx/kube-proxy id=94a37a87-20bd-4f74-b675-02ce1e6a5c72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4cf3cdf4a4145       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   2aa1b10678ba2       kindnet-xtbw2                               kube-system
	5bd6f90684439       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   c07fb4a269dfc       kube-proxy-4cxmx                            kube-system
	e735e98659987       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            1                   07aeb8b2143d6       kube-scheduler-newest-cni-915456            kube-system
	692d04809b9f0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   036fcfd8da859       kube-apiserver-newest-cni-915456            kube-system
	e6d473f5be1fd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   7d29c18894f65       kube-controller-manager-newest-cni-915456   kube-system
	604ffe25b066e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   5b59338dd8b2d       etcd-newest-cni-915456                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-915456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-915456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=newest-cni-915456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T12_02_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 12:02:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-915456
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:03:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 12:03:10 +0000   Sat, 01 Nov 2025 12:02:37 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-915456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0c03d2de-2716-4951-b7fa-b9e1f188afd7
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-915456                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-xtbw2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-915456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-915456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-4cxmx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-915456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
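The percentages in the table above are integer ratios of summed pod requests to node allocatable: 750m of CPU requested against 2 allocatable cores is 37%. A small sketch of that arithmetic with the apimachinery resource package:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// 250m apiserver + 200m controller-manager + 100m each for etcd, kindnet, scheduler.
	requests := resource.MustParse("750m")
	allocatable := resource.MustParse("2") // node allocatable CPU from the output above
	pct := requests.MilliValue() * 100 / allocatable.MilliValue()
	fmt.Printf("cpu %s (%d%%)\n", requests.String(), pct) // prints: cpu 750m (37%)
}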
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-915456 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-915456 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-915456 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-915456 event: Registered Node newest-cni-915456 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 16s)  kubelet          Node newest-cni-915456 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 16s)  kubelet          Node newest-cni-915456 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 16s)  kubelet          Node newest-cni-915456 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-915456 event: Registered Node newest-cni-915456 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:40] overlayfs: idmapped layers are currently not supported
	[ +15.947160] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [604ffe25b066ea1ca6f3cb37923272814ecc5129a5eb18e635d4fa3cf43a27e8] <==
	{"level":"warn","ts":"2025-11-01T12:03:08.782231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.810889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.827846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.866146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.874623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.884649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.906954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.923986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.965219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.983723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:08.998795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.014717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.030181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.048403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.065948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.090889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.106188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.143830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.146057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.163922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.186274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.209601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.226515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.246716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:09.361115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54842","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:03:18 up  3:45,  0 user,  load average: 3.83, 3.78, 3.03
	Linux newest-cni-915456 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cf3cdf4a41458583de0df10d3c2942088cb6ba41083fb9c2be924ee873eff0f] <==
	I1101 12:03:11.440209       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:03:11.522158       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 12:03:11.522285       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:03:11.522299       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:03:11.522314       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:03:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:03:11.718018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:03:11.720055       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:03:11.720902       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:03:11.721082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [692d04809b9f0753902fb84cccb9fca957c437d518ababe36294a45488b0a1ff] <==
	I1101 12:03:10.640409       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 12:03:10.640417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 12:03:10.640424       1 cache.go:39] Caches are synced for autoregister controller
	I1101 12:03:10.651577       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1101 12:03:10.677577       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:03:10.685783       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 12:03:10.685817       1 policy_source.go:240] refreshing policies
	I1101 12:03:10.694897       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 12:03:10.695949       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 12:03:10.695991       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 12:03:10.696044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 12:03:10.705101       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:03:10.724412       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 12:03:10.764258       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:03:10.968981       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 12:03:11.230039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:03:11.508498       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:03:11.741491       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:03:11.818204       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:03:11.856680       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:03:12.029019       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.41.60"}
	I1101 12:03:12.066073       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.98.234"}
	I1101 12:03:13.968503       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:03:14.267632       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:03:14.322426       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e6d473f5be1fd68186a2bdf1e8a283616a64e2e4850a5aede158448888d098b7] <==
	I1101 12:03:13.925971       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:03:13.926010       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:03:13.926016       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:03:13.926021       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:03:13.928297       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:03:13.932436       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 12:03:13.934638       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:03:13.942322       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:03:13.958955       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 12:03:13.959611       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 12:03:13.959768       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 12:03:13.959823       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 12:03:13.965810       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 12:03:13.965897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:13.965976       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 12:03:13.966000       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 12:03:13.965979       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:03:13.966117       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:03:13.966107       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 12:03:13.965990       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 12:03:13.971862       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:13.972934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:03:13.973012       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:03:13.973022       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:03:13.977512       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [5bd6f906844395bc6b2a9c203fb7bec52632013e8a016d7a61cbf06e7f6dea48] <==
	I1101 12:03:12.008811       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:03:12.196510       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:03:12.296736       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:03:12.296769       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 12:03:12.296856       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:03:12.323259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:03:12.323315       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:03:12.327308       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:03:12.327655       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:03:12.327678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:12.328810       1 config.go:200] "Starting service config controller"
	I1101 12:03:12.328828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:03:12.332247       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:03:12.332322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:03:12.332362       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:03:12.332388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:03:12.334600       1 config.go:309] "Starting node config controller"
	I1101 12:03:12.334708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:03:12.334740       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:03:12.428984       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:03:12.433330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:03:12.433336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e735e98659987111572eec249f828f7621bfaba194220e2c493a43e703434f5e] <==
	I1101 12:03:08.822045       1 serving.go:386] Generated self-signed cert in-memory
	I1101 12:03:12.187487       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:03:12.191452       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:12.202682       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:03:12.202837       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 12:03:12.202896       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 12:03:12.202968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:03:12.204819       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:12.205357       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:12.210463       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:12.204996       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:12.214004       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:12.302999       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 12:03:12.319075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.439008     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.812836     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-915456\" already exists" pod="kube-system/kube-controller-manager-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.812872     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.818036     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.818136     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.818168     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.819436     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.845452     726 apiserver.go:52] "Watching apiserver"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.865808     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-915456\" already exists" pod="kube-system/kube-scheduler-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.865846     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.904123     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-915456\" already exists" pod="kube-system/etcd-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.904155     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: E1101 12:03:10.933805     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-915456\" already exists" pod="kube-system/kube-apiserver-newest-cni-915456"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.944666     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.944786     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf13f387-a80a-4910-8fef-45c3ace6b6c8-lib-modules\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.944812     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf13f387-a80a-4910-8fef-45c3ace6b6c8-xtables-lock\") pod \"kube-proxy-4cxmx\" (UID: \"bf13f387-a80a-4910-8fef-45c3ace6b6c8\") " pod="kube-system/kube-proxy-4cxmx"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.945520     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-cni-cfg\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.945569     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-xtables-lock\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.945587     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f91412bc-141d-4706-a3b4-f173a4a731a3-lib-modules\") pod \"kindnet-xtbw2\" (UID: \"f91412bc-141d-4706-a3b4-f173a4a731a3\") " pod="kube-system/kindnet-xtbw2"
	Nov 01 12:03:10 newest-cni-915456 kubelet[726]: I1101 12:03:10.986206     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 12:03:11 newest-cni-915456 kubelet[726]: W1101 12:03:11.204950     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/crio-c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6 WatchSource:0}: Error finding container c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6: Status 404 returned error can't find the container with id c07fb4a269dfcc2cf051497eff56de26be1fa9ec042b92ec57b7a4e65908c3e6
	Nov 01 12:03:11 newest-cni-915456 kubelet[726]: W1101 12:03:11.206144     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/888185dcceae55c6342bd31e38b604a580ffef9378330fc84aad429bd443b74e/crio-2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71 WatchSource:0}: Error finding container 2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71: Status 404 returned error can't find the container with id 2aa1b10678ba2d1968f57a41d90de6372cc8f2124f00a34cf887f365d01d3b71
	Nov 01 12:03:13 newest-cni-915456 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:03:13 newest-cni-915456 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:03:13 newest-cni-915456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-915456 -n newest-cni-915456
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-915456 -n newest-cni-915456: exit status 2 (350.08482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-915456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg: exit status 1 (80.893961ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-fwd4w" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-lphz7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gqnkg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-915456 describe pod coredns-66bc5c9577-fwd4w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lphz7 kubernetes-dashboard-855c9754f9-gqnkg: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.12s)
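
Both this failure and the TestStartStop/group/default-k8s-diff-port/serial/Pause failure below reduce to the same error: `minikube pause` shells into the node and runs `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" even though crictl still reports running containers. A minimal diagnostic sketch against the node follows; it is not part of the test run, the profile name is simply the one used in this test, and /run/crio/runc is only a guess at where CRI-O might keep its runc state, not a value confirmed by these logs.

	# Hedged sketch, not from the test suite: check whether runc's default state
	# directory exists on the node, and whether pointing runc at a CRI-O-specific
	# root works instead. Assumes profile newest-cni-915456 still exists;
	# /run/crio/runc is an assumed path.
	minikube -p newest-cni-915456 ssh -- 'ls -ld /run/runc /run/crio 2>&1'
	minikube -p newest-cni-915456 ssh -- 'sudo runc list -f json || sudo runc --root /run/crio/runc list -f json'
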

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-772362 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-772362 --alsologtostderr -v=1: exit status 80 (2.522692882s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-772362 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 12:04:27.724660  750888 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:04:27.724817  750888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:04:27.724830  750888 out.go:374] Setting ErrFile to fd 2...
	I1101 12:04:27.724837  750888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:04:27.725102  750888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:04:27.725359  750888 out.go:368] Setting JSON to false
	I1101 12:04:27.725381  750888 mustload.go:66] Loading cluster: default-k8s-diff-port-772362
	I1101 12:04:27.725813  750888 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:04:27.726255  750888 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:04:27.745181  750888 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:04:27.745529  750888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:04:27.809840  750888 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 12:04:27.79970075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:04:27.810488  750888 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-772362 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 12:04:27.814251  750888 out.go:179] * Pausing node default-k8s-diff-port-772362 ... 
	I1101 12:04:27.817447  750888 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:04:27.817866  750888 ssh_runner.go:195] Run: systemctl --version
	I1101 12:04:27.817936  750888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:04:27.835612  750888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:04:27.940585  750888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:04:27.954931  750888 pause.go:52] kubelet running: true
	I1101 12:04:27.955018  750888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:04:28.224756  750888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:04:28.224843  750888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:04:28.316733  750888 cri.go:89] found id: "ccb3e9649abb4d3db8b3d243402c03bb237c2ba79fff3fbf00f84ea8b516b9ab"
	I1101 12:04:28.316757  750888 cri.go:89] found id: "60d058208068e15a38ab1917ed435ff30df2904bc304c752ea4a5232e31e1ff9"
	I1101 12:04:28.316762  750888 cri.go:89] found id: "1045dd3947bb80515dc0cc7a58d04eef3d54108be2c3a2a779a3731110c50a24"
	I1101 12:04:28.316766  750888 cri.go:89] found id: "ae1f673a830aae14249b0aa15c1f704cf4fe946dada0b3da9657525bdd91b06e"
	I1101 12:04:28.316769  750888 cri.go:89] found id: "00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175"
	I1101 12:04:28.316773  750888 cri.go:89] found id: "81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf"
	I1101 12:04:28.316777  750888 cri.go:89] found id: "f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093"
	I1101 12:04:28.316780  750888 cri.go:89] found id: "302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822"
	I1101 12:04:28.316784  750888 cri.go:89] found id: "53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249"
	I1101 12:04:28.316790  750888 cri.go:89] found id: "317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	I1101 12:04:28.316794  750888 cri.go:89] found id: "866787adebf458b16bf91276a5d497a0448a1e79a43137ae5cc98aedb84d2c3c"
	I1101 12:04:28.316798  750888 cri.go:89] found id: ""
	I1101 12:04:28.316847  750888 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:04:28.328671  750888 retry.go:31] will retry after 271.298699ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:04:28Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:04:28.600115  750888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:04:28.613907  750888 pause.go:52] kubelet running: false
	I1101 12:04:28.613983  750888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:04:28.789371  750888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:04:28.789492  750888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:04:28.872476  750888 cri.go:89] found id: "ccb3e9649abb4d3db8b3d243402c03bb237c2ba79fff3fbf00f84ea8b516b9ab"
	I1101 12:04:28.872494  750888 cri.go:89] found id: "60d058208068e15a38ab1917ed435ff30df2904bc304c752ea4a5232e31e1ff9"
	I1101 12:04:28.872499  750888 cri.go:89] found id: "1045dd3947bb80515dc0cc7a58d04eef3d54108be2c3a2a779a3731110c50a24"
	I1101 12:04:28.872512  750888 cri.go:89] found id: "ae1f673a830aae14249b0aa15c1f704cf4fe946dada0b3da9657525bdd91b06e"
	I1101 12:04:28.872516  750888 cri.go:89] found id: "00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175"
	I1101 12:04:28.872520  750888 cri.go:89] found id: "81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf"
	I1101 12:04:28.872523  750888 cri.go:89] found id: "f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093"
	I1101 12:04:28.872525  750888 cri.go:89] found id: "302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822"
	I1101 12:04:28.872528  750888 cri.go:89] found id: "53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249"
	I1101 12:04:28.872534  750888 cri.go:89] found id: "317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	I1101 12:04:28.872537  750888 cri.go:89] found id: "866787adebf458b16bf91276a5d497a0448a1e79a43137ae5cc98aedb84d2c3c"
	I1101 12:04:28.872540  750888 cri.go:89] found id: ""
	I1101 12:04:28.872607  750888 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:04:28.887575  750888 retry.go:31] will retry after 243.364796ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:04:28Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:04:29.132129  750888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:04:29.146133  750888 pause.go:52] kubelet running: false
	I1101 12:04:29.146220  750888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:04:29.313456  750888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:04:29.313629  750888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:04:29.390036  750888 cri.go:89] found id: "ccb3e9649abb4d3db8b3d243402c03bb237c2ba79fff3fbf00f84ea8b516b9ab"
	I1101 12:04:29.390066  750888 cri.go:89] found id: "60d058208068e15a38ab1917ed435ff30df2904bc304c752ea4a5232e31e1ff9"
	I1101 12:04:29.390071  750888 cri.go:89] found id: "1045dd3947bb80515dc0cc7a58d04eef3d54108be2c3a2a779a3731110c50a24"
	I1101 12:04:29.390075  750888 cri.go:89] found id: "ae1f673a830aae14249b0aa15c1f704cf4fe946dada0b3da9657525bdd91b06e"
	I1101 12:04:29.390078  750888 cri.go:89] found id: "00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175"
	I1101 12:04:29.390081  750888 cri.go:89] found id: "81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf"
	I1101 12:04:29.390084  750888 cri.go:89] found id: "f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093"
	I1101 12:04:29.390107  750888 cri.go:89] found id: "302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822"
	I1101 12:04:29.390119  750888 cri.go:89] found id: "53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249"
	I1101 12:04:29.390139  750888 cri.go:89] found id: "317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	I1101 12:04:29.390151  750888 cri.go:89] found id: "866787adebf458b16bf91276a5d497a0448a1e79a43137ae5cc98aedb84d2c3c"
	I1101 12:04:29.390154  750888 cri.go:89] found id: ""
	I1101 12:04:29.390223  750888 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:04:29.401874  750888 retry.go:31] will retry after 353.608993ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:04:29Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:04:29.756509  750888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:04:29.770813  750888 pause.go:52] kubelet running: false
	I1101 12:04:29.770940  750888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 12:04:29.959231  750888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 12:04:29.959357  750888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 12:04:30.140041  750888 cri.go:89] found id: "ccb3e9649abb4d3db8b3d243402c03bb237c2ba79fff3fbf00f84ea8b516b9ab"
	I1101 12:04:30.140141  750888 cri.go:89] found id: "60d058208068e15a38ab1917ed435ff30df2904bc304c752ea4a5232e31e1ff9"
	I1101 12:04:30.140172  750888 cri.go:89] found id: "1045dd3947bb80515dc0cc7a58d04eef3d54108be2c3a2a779a3731110c50a24"
	I1101 12:04:30.140195  750888 cri.go:89] found id: "ae1f673a830aae14249b0aa15c1f704cf4fe946dada0b3da9657525bdd91b06e"
	I1101 12:04:30.140206  750888 cri.go:89] found id: "00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175"
	I1101 12:04:30.140211  750888 cri.go:89] found id: "81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf"
	I1101 12:04:30.140214  750888 cri.go:89] found id: "f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093"
	I1101 12:04:30.140217  750888 cri.go:89] found id: "302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822"
	I1101 12:04:30.140220  750888 cri.go:89] found id: "53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249"
	I1101 12:04:30.140233  750888 cri.go:89] found id: "317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	I1101 12:04:30.140237  750888 cri.go:89] found id: "866787adebf458b16bf91276a5d497a0448a1e79a43137ae5cc98aedb84d2c3c"
	I1101 12:04:30.140239  750888 cri.go:89] found id: ""
	I1101 12:04:30.140297  750888 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 12:04:30.160093  750888 out.go:203] 
	W1101 12:04:30.170144  750888 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 12:04:30.170243  750888 out.go:285] * 
	* 
	W1101 12:04:30.178859  750888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 12:04:30.182100  750888 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-772362 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-772362
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-772362:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0",
	        "Created": "2025-11-01T12:01:37.247472685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 746250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:03:20.866334529Z",
	            "FinishedAt": "2025-11-01T12:03:19.832824414Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/hosts",
	        "LogPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0-json.log",
	        "Name": "/default-k8s-diff-port-772362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-772362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-772362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0",
	                "LowerDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-772362",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-772362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-772362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-772362",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-772362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1201ae80f644d8b3d59f6381f56e651287d51dd406cdfb1677e35b50426fff7",
	            "SandboxKey": "/var/run/docker/netns/f1201ae80f64",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-772362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:8a:64:8f:f0:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73eb4efd47c2bd595401a91b3c40a866a38f38c55c2d40593383e02853a1364a",
	                    "EndpointID": "1096f4fc37f42efaf5e73f105e92d1130d1e99e1c26e46598235ee1593434e20",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-772362",
	                        "087d99a3919f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362: exit status 2 (367.420522ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-772362 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-772362 logs -n 25: (1.347937391s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:02 UTC │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ stop    │ -p newest-cni-915456 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-915456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772362 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ image   │ newest-cni-915456 image list --format=json                                                                                                                                                                                                    │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ pause   │ -p newest-cni-915456 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ delete  │ -p newest-cni-915456                                                                                                                                                                                                                          │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:04 UTC │
	│ delete  │ -p newest-cni-915456                                                                                                                                                                                                                          │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ start   │ -p auto-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-507511                  │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ image   │ default-k8s-diff-port-772362 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:04 UTC │ 01 Nov 25 12:04 UTC │
	│ pause   │ -p default-k8s-diff-port-772362 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:03:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:03:21.999842  746742 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:03:22.000049  746742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:03:22.000076  746742 out.go:374] Setting ErrFile to fd 2...
	I1101 12:03:22.000101  746742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:03:22.000378  746742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:03:22.000845  746742 out.go:368] Setting JSON to false
	I1101 12:03:22.001803  746742 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13551,"bootTime":1761985051,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:03:22.007720  746742 start.go:143] virtualization:  
	I1101 12:03:22.011582  746742 out.go:179] * [auto-507511] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:03:22.015742  746742 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:03:22.015873  746742 notify.go:221] Checking for updates...
	I1101 12:03:22.022113  746742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:03:22.025115  746742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:22.028255  746742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:03:22.031490  746742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:03:22.034440  746742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:03:22.038049  746742 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:22.038159  746742 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:03:22.070788  746742 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:03:22.070915  746742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:03:22.138895  746742 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-01 12:03:22.124625527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:03:22.138998  746742 docker.go:319] overlay module found
	I1101 12:03:22.142132  746742 out.go:179] * Using the docker driver based on user configuration
	I1101 12:03:22.145064  746742 start.go:309] selected driver: docker
	I1101 12:03:22.145090  746742 start.go:930] validating driver "docker" against <nil>
	I1101 12:03:22.145104  746742 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:03:22.145909  746742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:03:22.200874  746742 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-01 12:03:22.191435811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:03:22.201028  746742 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 12:03:22.201281  746742 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:03:22.204291  746742 out.go:179] * Using Docker driver with root privileges
	I1101 12:03:22.207146  746742 cni.go:84] Creating CNI manager for ""
	I1101 12:03:22.207227  746742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:22.207242  746742 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 12:03:22.207337  746742 start.go:353] cluster config:
	{Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 12:03:22.210488  746742 out.go:179] * Starting "auto-507511" primary control-plane node in "auto-507511" cluster
	I1101 12:03:22.213318  746742 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:03:22.216503  746742 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:03:22.219439  746742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:22.219520  746742 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:03:22.219524  746742 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:03:22.219535  746742 cache.go:59] Caching tarball of preloaded images
	I1101 12:03:22.219635  746742 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:03:22.219646  746742 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:03:22.219773  746742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/config.json ...
	I1101 12:03:22.219802  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/config.json: {Name:mkec428b9955b09281a48807c19dca6bbb8cf781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:22.239226  746742 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:03:22.239252  746742 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:03:22.239271  746742 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:03:22.239296  746742 start.go:360] acquireMachinesLock for auto-507511: {Name:mkd1ed91bd009dfe0cb30a20b07d722c9cbc0c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:03:22.239407  746742 start.go:364] duration metric: took 91.234µs to acquireMachinesLock for "auto-507511"
	I1101 12:03:22.239439  746742 start.go:93] Provisioning new machine with config: &{Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:22.239527  746742 start.go:125] createHost starting for "" (driver="docker")
	I1101 12:03:20.833772  746003 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-772362" ...
	I1101 12:03:20.833854  746003 cli_runner.go:164] Run: docker start default-k8s-diff-port-772362
	I1101 12:03:21.163800  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:21.184042  746003 kic.go:430] container "default-k8s-diff-port-772362" state is running.
	I1101 12:03:21.184407  746003 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:03:21.212152  746003 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json ...
	I1101 12:03:21.213350  746003 machine.go:94] provisionDockerMachine start ...
	I1101 12:03:21.213443  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:21.238857  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:21.239182  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:21.239192  746003 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:03:21.240326  746003 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 12:03:24.397221  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:03:24.397296  746003 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772362"
	I1101 12:03:24.397403  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:24.416465  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:24.416764  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:24.416776  746003 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772362 && echo "default-k8s-diff-port-772362" | sudo tee /etc/hostname
	I1101 12:03:24.580348  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:03:24.580520  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:24.603764  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:24.604076  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:24.604093  746003 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772362/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:03:24.766090  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:03:24.766119  746003 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:03:24.766155  746003 ubuntu.go:190] setting up certificates
	I1101 12:03:24.766173  746003 provision.go:84] configureAuth start
	I1101 12:03:24.766238  746003 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:03:24.787111  746003 provision.go:143] copyHostCerts
	I1101 12:03:24.787186  746003 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:03:24.787207  746003 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:03:24.787282  746003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:03:24.787375  746003 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:03:24.787386  746003 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:03:24.787415  746003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:03:24.787480  746003 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:03:24.787490  746003 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:03:24.787520  746003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:03:24.787570  746003 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772362 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-772362 localhost minikube]
	I1101 12:03:22.242985  746742 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 12:03:22.243242  746742 start.go:159] libmachine.API.Create for "auto-507511" (driver="docker")
	I1101 12:03:22.243288  746742 client.go:173] LocalClient.Create starting
	I1101 12:03:22.243369  746742 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 12:03:22.243408  746742 main.go:143] libmachine: Decoding PEM data...
	I1101 12:03:22.243429  746742 main.go:143] libmachine: Parsing certificate...
	I1101 12:03:22.243496  746742 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 12:03:22.243518  746742 main.go:143] libmachine: Decoding PEM data...
	I1101 12:03:22.243531  746742 main.go:143] libmachine: Parsing certificate...
	I1101 12:03:22.243925  746742 cli_runner.go:164] Run: docker network inspect auto-507511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 12:03:22.259928  746742 cli_runner.go:211] docker network inspect auto-507511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 12:03:22.260015  746742 network_create.go:284] running [docker network inspect auto-507511] to gather additional debugging logs...
	I1101 12:03:22.260036  746742 cli_runner.go:164] Run: docker network inspect auto-507511
	W1101 12:03:22.275824  746742 cli_runner.go:211] docker network inspect auto-507511 returned with exit code 1
	I1101 12:03:22.275857  746742 network_create.go:287] error running [docker network inspect auto-507511]: docker network inspect auto-507511: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-507511 not found
	I1101 12:03:22.275875  746742 network_create.go:289] output of [docker network inspect auto-507511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-507511 not found
	
	** /stderr **
	I1101 12:03:22.275966  746742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:22.292361  746742 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 12:03:22.292694  746742 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 12:03:22.293035  746742 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 12:03:22.293469  746742 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c89e0}
	I1101 12:03:22.293501  746742 network_create.go:124] attempt to create docker network auto-507511 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 12:03:22.293555  746742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-507511 auto-507511
	I1101 12:03:22.350678  746742 network_create.go:108] docker network auto-507511 192.168.76.0/24 created
	I1101 12:03:22.350708  746742 kic.go:121] calculated static IP "192.168.76.2" for the "auto-507511" container
	I1101 12:03:22.350794  746742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 12:03:22.366994  746742 cli_runner.go:164] Run: docker volume create auto-507511 --label name.minikube.sigs.k8s.io=auto-507511 --label created_by.minikube.sigs.k8s.io=true
	I1101 12:03:22.384910  746742 oci.go:103] Successfully created a docker volume auto-507511
	I1101 12:03:22.385000  746742 cli_runner.go:164] Run: docker run --rm --name auto-507511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-507511 --entrypoint /usr/bin/test -v auto-507511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 12:03:22.905406  746742 oci.go:107] Successfully prepared a docker volume auto-507511
	I1101 12:03:22.905462  746742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:22.905483  746742 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 12:03:22.905559  746742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-507511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 12:03:25.649686  746003 provision.go:177] copyRemoteCerts
	I1101 12:03:25.649813  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:03:25.649893  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:25.668456  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:25.775225  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:25.798065  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 12:03:25.816522  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:25.837365  746003 provision.go:87] duration metric: took 1.071165158s to configureAuth
	I1101 12:03:25.837433  746003 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:25.837642  746003 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:25.837781  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:25.855533  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:25.855876  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:25.855896  746003 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:26.329224  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:26.329260  746003 machine.go:97] duration metric: took 5.115885408s to provisionDockerMachine
	I1101 12:03:26.329272  746003 start.go:293] postStartSetup for "default-k8s-diff-port-772362" (driver="docker")
	I1101 12:03:26.329302  746003 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:26.329403  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:26.329478  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.351936  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.457490  746003 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:26.461100  746003 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:26.461132  746003 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:26.461146  746003 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:26.461201  746003 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:26.461290  746003 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:26.461393  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:26.468825  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:26.486454  746003 start.go:296] duration metric: took 157.165741ms for postStartSetup
	I1101 12:03:26.486545  746003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:26.486584  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.503490  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.607032  746003 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:26.611734  746003 fix.go:56] duration metric: took 5.798051625s for fixHost
	I1101 12:03:26.611806  746003 start.go:83] releasing machines lock for "default-k8s-diff-port-772362", held for 5.798156463s
	I1101 12:03:26.611903  746003 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:03:26.628286  746003 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:26.628342  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.628658  746003 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:26.628711  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.650476  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.650629  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.749369  746003 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:26.843112  746003 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:26.892577  746003 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:26.897357  746003 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:26.897478  746003 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:26.905377  746003 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:03:26.905400  746003 start.go:496] detecting cgroup driver to use...
	I1101 12:03:26.905432  746003 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:03:26.905508  746003 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:26.920855  746003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:26.936547  746003 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:26.936615  746003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:26.953008  746003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:26.966139  746003 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:27.082119  746003 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:27.198275  746003 docker.go:234] disabling docker service ...
	I1101 12:03:27.198354  746003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:27.213535  746003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:27.228455  746003 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:27.363734  746003 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:27.519790  746003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:27.538616  746003 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:27.554334  746003 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:27.554405  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.564413  746003 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:27.564479  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.576981  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.591160  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.610009  746003 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:27.619027  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.633764  746003 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.648181  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.658467  746003 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:27.667791  746003 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:27.685671  746003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:27.858008  746003 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 12:03:28.050352  746003 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:28.050431  746003 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:28.063083  746003 start.go:564] Will wait 60s for crictl version
	I1101 12:03:28.063141  746003 ssh_runner.go:195] Run: which crictl
	I1101 12:03:28.073712  746003 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:28.158893  746003 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:28.158974  746003 ssh_runner.go:195] Run: crio --version
	I1101 12:03:28.234757  746003 ssh_runner.go:195] Run: crio --version
	I1101 12:03:28.291235  746003 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:03:28.295116  746003 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:28.315140  746003 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:28.319985  746003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:28.333616  746003 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:28.333850  746003 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:28.333924  746003 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:28.381873  746003 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:28.381901  746003 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:28.381956  746003 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:28.415125  746003 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:28.415154  746003 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:28.415162  746003 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 12:03:28.415264  746003 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:03:28.415346  746003 ssh_runner.go:195] Run: crio config
	I1101 12:03:28.527753  746003 cni.go:84] Creating CNI manager for ""
	I1101 12:03:28.527787  746003 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:28.527808  746003 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:03:28.527831  746003 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772362 NodeName:default-k8s-diff-port-772362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:28.527986  746003 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
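
The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) appears to be the config minikube hands to kubeadm for this profile. A minimal sketch of checking such a file outside the test run, assuming it is saved locally as kubeadm.yaml (hypothetical path) and a kubeadm binary matching v1.34 is installed:

	# Print what kubeadm would do with this config without changing the node
	sudo kubeadm init --config kubeadm.yaml --dry-run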
	
	I1101 12:03:28.528070  746003 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:28.539852  746003 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:28.539932  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:28.550581  746003 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 12:03:28.571059  746003 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:28.593765  746003 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 12:03:28.614466  746003 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:28.619572  746003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:28.632265  746003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:28.856299  746003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:28.885412  746003 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362 for IP: 192.168.85.2
	I1101 12:03:28.885436  746003 certs.go:195] generating shared ca certs ...
	I1101 12:03:28.885453  746003 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:28.885618  746003 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:28.885670  746003 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:28.885682  746003 certs.go:257] generating profile certs ...
	I1101 12:03:28.885816  746003 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key
	I1101 12:03:28.885897  746003 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429
	I1101 12:03:28.885944  746003 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key
	I1101 12:03:28.886085  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:28.886135  746003 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:28.886149  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:28.886183  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:28.886214  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:28.886240  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:28.886302  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:28.886968  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:28.913875  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:28.960218  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:28.991347  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:29.028732  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 12:03:29.059862  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:03:29.105614  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:29.148836  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:03:29.197749  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:29.225007  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:29.249282  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:29.273682  746003 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:29.291334  746003 ssh_runner.go:195] Run: openssl version
	I1101 12:03:29.300065  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:29.312423  746003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:29.320183  746003 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:29.320252  746003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:29.371110  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:29.381005  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:29.391032  746003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:29.394931  746003 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:29.394999  746003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:29.442049  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:03:29.450442  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:29.458925  746003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:29.463043  746003 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:29.463148  746003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:29.505801  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:03:29.514239  746003 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:29.518353  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:03:29.559589  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:03:29.601978  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:03:29.645474  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:03:29.696917  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:03:29.762245  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
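
Each -checkend 86400 run above asks whether the named control-plane certificate expires within the next 24 hours; a non-zero exit marks the cert as due for renewal. The same check written against Go's crypto/x509 (a sketch, not the code minikube executes):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the certificate at path expires before
	// now+d, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
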
	I1101 12:03:29.854106  746003 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:29.854278  746003 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:29.854380  746003 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:29.903390  746003 cri.go:89] found id: "81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf"
	I1101 12:03:29.903413  746003 cri.go:89] found id: "f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093"
	I1101 12:03:29.903428  746003 cri.go:89] found id: "302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822"
	I1101 12:03:29.903432  746003 cri.go:89] found id: "53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249"
	I1101 12:03:29.903441  746003 cri.go:89] found id: ""
	I1101 12:03:29.903494  746003 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:03:29.924868  746003 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:29Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:29.925010  746003 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:29.948225  746003 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:03:29.948287  746003 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:03:29.948386  746003 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:03:29.963667  746003 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:03:29.964162  746003 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-772362" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:29.964328  746003 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-772362" cluster setting kubeconfig missing "default-k8s-diff-port-772362" context setting]
	I1101 12:03:29.964761  746003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:29.966652  746003 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:03:29.978385  746003 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 12:03:29.978483  746003 kubeadm.go:602] duration metric: took 30.152734ms to restartPrimaryControlPlane
	I1101 12:03:29.978511  746003 kubeadm.go:403] duration metric: took 124.415918ms to StartCluster
	I1101 12:03:29.978540  746003 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:29.978648  746003 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:29.979403  746003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:29.979682  746003 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:29.979997  746003 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:03:29.980071  746003 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-772362"
	I1101 12:03:29.980085  746003 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-772362"
	W1101 12:03:29.980090  746003 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:03:29.980114  746003 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:03:29.980557  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:29.981236  746003 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-772362"
	I1101 12:03:29.981280  746003 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-772362"
	W1101 12:03:29.981317  746003 addons.go:248] addon dashboard should already be in state true
	I1101 12:03:29.981369  746003 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:03:29.982015  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:29.982209  746003 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:29.982314  746003 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-772362"
	I1101 12:03:29.982347  746003 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-772362"
	I1101 12:03:29.982649  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:29.996498  746003 out.go:179] * Verifying Kubernetes components...
	I1101 12:03:30.003843  746003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:30.077352  746003 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:03:30.081032  746003 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:03:30.083976  746003 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-772362"
	W1101 12:03:30.084005  746003 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:03:30.084033  746003 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:03:30.084299  746003 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:30.084316  746003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:03:30.084385  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:30.084941  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:30.085185  746003 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:03:30.090909  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:03:30.090948  746003 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:03:30.091033  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:30.137971  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:30.144443  746003 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:30.144467  746003 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:03:30.144550  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:30.173770  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:30.183901  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
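
The "new ssh client" entries open SSH sessions to the container's published 22/tcp port (127.0.0.1:33820 here) as user docker with the per-machine id_rsa key; the later Run: lines go through such sessions. A rough equivalent using golang.org/x/crypto/ssh, with the path and port copied from the log (a hypothetical stand-in for sshutil, not its real implementation):

	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		keyPath := "/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33820", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("%s (err=%v)\n", out, err)
	}
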
	I1101 12:03:30.364022  746003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:30.387453  746003 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:03:30.422081  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:03:30.422107  746003 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:03:30.426490  746003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:30.432257  746003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:27.425343  746742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-507511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.519746198s)
	I1101 12:03:27.425374  746742 kic.go:203] duration metric: took 4.519887353s to extract preloaded images to volume ...
	W1101 12:03:27.425519  746742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 12:03:27.425640  746742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 12:03:27.532467  746742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-507511 --name auto-507511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-507511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-507511 --network auto-507511 --ip 192.168.76.2 --volume auto-507511:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 12:03:27.896869  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Running}}
	I1101 12:03:27.927771  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:03:27.960296  746742 cli_runner.go:164] Run: docker exec auto-507511 stat /var/lib/dpkg/alternatives/iptables
	I1101 12:03:28.024438  746742 oci.go:144] the created container "auto-507511" has a running status.
	I1101 12:03:28.024465  746742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa...
	I1101 12:03:28.061203  746742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 12:03:28.090777  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:03:28.114759  746742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 12:03:28.114888  746742 kic_runner.go:114] Args: [docker exec --privileged auto-507511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 12:03:28.174501  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:03:28.205683  746742 machine.go:94] provisionDockerMachine start ...
	I1101 12:03:28.205784  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:28.231212  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:28.231550  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:28.231567  746742 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:03:28.232258  746742 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58626->127.0.0.1:33825: read: connection reset by peer
	I1101 12:03:31.413408  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-507511
	
	I1101 12:03:31.413483  746742 ubuntu.go:182] provisioning hostname "auto-507511"
	I1101 12:03:31.413576  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:31.448109  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:31.448426  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:31.448436  746742 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-507511 && echo "auto-507511" | sudo tee /etc/hostname
	I1101 12:03:31.640645  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-507511
	
	I1101 12:03:31.640717  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:31.664002  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:31.664322  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:31.664345  746742 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-507511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-507511/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-507511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:03:31.860002  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:03:31.860079  746742 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:03:31.860145  746742 ubuntu.go:190] setting up certificates
	I1101 12:03:31.860193  746742 provision.go:84] configureAuth start
	I1101 12:03:31.860282  746742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-507511
	I1101 12:03:31.887253  746742 provision.go:143] copyHostCerts
	I1101 12:03:31.887331  746742 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:03:31.887347  746742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:03:31.887424  746742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:03:31.887517  746742 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:03:31.887529  746742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:03:31.887557  746742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:03:31.887621  746742 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:03:31.887631  746742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:03:31.887657  746742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:03:31.887711  746742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.auto-507511 san=[127.0.0.1 192.168.76.2 auto-507511 localhost minikube]
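
configureAuth signs a machine server certificate against the local CA with the SANs listed above (127.0.0.1, 192.168.76.2, auto-507511, localhost, minikube). A compact crypto/x509 sketch of issuing a server cert with DNS and IP SANs; the throwaway in-memory CA below exists only to keep the example self-contained, the real run loads ca.pem and ca-key.pem from disk:

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA (assumption for the sketch; minikube reuses its existing CA).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server certificate with the SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-507511"}},
			DNSNames:     []string{"auto-507511", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
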
	I1101 12:03:30.463402  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:03:30.463426  746003 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:03:30.562473  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:03:30.562495  746003 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:03:30.639094  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:03:30.639117  746003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:03:30.691015  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:03:30.691037  746003 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:03:30.717247  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:03:30.717269  746003 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:03:30.739585  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:03:30.739663  746003 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:03:30.766510  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:03:30.766537  746003 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:03:30.791559  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:30.791581  746003 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:03:30.820144  746003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:35.215847  746003 node_ready.go:49] node "default-k8s-diff-port-772362" is "Ready"
	I1101 12:03:35.215877  746003 node_ready.go:38] duration metric: took 4.828391439s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:03:35.215892  746003 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:03:35.215946  746003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:03:32.637175  746742 provision.go:177] copyRemoteCerts
	I1101 12:03:32.637252  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:03:32.637298  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:32.654947  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:32.770970  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:32.799530  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 12:03:32.828512  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:32.860816  746742 provision.go:87] duration metric: took 1.000586236s to configureAuth
	I1101 12:03:32.860845  746742 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:32.861030  746742 config.go:182] Loaded profile config "auto-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:32.861159  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:32.886890  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:32.887209  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:32.887236  746742 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:33.257134  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:33.257154  746742 machine.go:97] duration metric: took 5.051437094s to provisionDockerMachine
	I1101 12:03:33.257164  746742 client.go:176] duration metric: took 11.013864269s to LocalClient.Create
	I1101 12:03:33.257185  746742 start.go:167] duration metric: took 11.013944779s to libmachine.API.Create "auto-507511"
	I1101 12:03:33.257193  746742 start.go:293] postStartSetup for "auto-507511" (driver="docker")
	I1101 12:03:33.257216  746742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:33.257282  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:33.257335  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.286798  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.402180  746742 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:33.406201  746742 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:33.406233  746742 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:33.406245  746742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:33.406299  746742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:33.406393  746742 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:33.406506  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:33.422950  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:33.461042  746742 start.go:296] duration metric: took 203.81518ms for postStartSetup
	I1101 12:03:33.461403  746742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-507511
	I1101 12:03:33.489873  746742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/config.json ...
	I1101 12:03:33.490162  746742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:33.490212  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.519938  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.642252  746742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:33.647375  746742 start.go:128] duration metric: took 11.407833277s to createHost
	I1101 12:03:33.647402  746742 start.go:83] releasing machines lock for "auto-507511", held for 11.407979683s
	I1101 12:03:33.647472  746742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-507511
	I1101 12:03:33.689218  746742 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:33.689275  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.689524  746742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:33.689573  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.721679  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.729246  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.846422  746742 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:33.940645  746742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:34.012304  746742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:34.026408  746742 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:34.026558  746742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:34.071581  746742 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
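
Before recommending its own CNI (kindnet is selected later for the docker driver with crio), minikube disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, which is what the find/mv command above does. The same step spelled out in Go for readability (a sketch of the shell command shown, not minikube's source):

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	func main() {
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, p := range matches {
			info, err := os.Stat(p)
			if err != nil || info.IsDir() {
				continue // -maxdepth 1 -type f: regular files in net.d only
			}
			base := filepath.Base(p)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
				continue // only bridge/podman configs are moved aside
			}
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", p)
		}
	}
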
	I1101 12:03:34.071657  746742 start.go:496] detecting cgroup driver to use...
	I1101 12:03:34.071738  746742 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:03:34.071833  746742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:34.100138  746742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:34.119565  746742 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:34.119678  746742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:34.141647  746742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:34.166530  746742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:34.353783  746742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:34.552746  746742 docker.go:234] disabling docker service ...
	I1101 12:03:34.552863  746742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:34.597193  746742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:34.614615  746742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:34.829468  746742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:35.038089  746742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:35.060073  746742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:35.083598  746742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:35.083762  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.103058  746742 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:35.103193  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.125884  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.142444  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.158341  746742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:35.171774  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.184753  746742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.204523  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.213313  746742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:35.224720  746742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:35.232775  746742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:35.475400  746742 ssh_runner.go:195] Run: sudo systemctl restart crio
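
The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place before this restart: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. One of those edits rendered in Go, to show what the sed expression does (illustrative; minikube drives it through the shell as logged):

	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	// setPauseImage rewrites the pause_image line of a crio drop-in, the way
	// `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` does in the log.
	func setPauseImage(confPath, image string) error {
		data, err := os.ReadFile(confPath)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
		return os.WriteFile(confPath, updated, 0o644)
	}
	
	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
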
	I1101 12:03:35.673349  746742 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:35.673480  746742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:35.681964  746742 start.go:564] Will wait 60s for crictl version
	I1101 12:03:35.682127  746742 ssh_runner.go:195] Run: which crictl
	I1101 12:03:35.686401  746742 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:35.734373  746742 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:35.734518  746742 ssh_runner.go:195] Run: crio --version
	I1101 12:03:35.793047  746742 ssh_runner.go:195] Run: crio --version
	I1101 12:03:35.846641  746742 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:03:35.849743  746742 cli_runner.go:164] Run: docker network inspect auto-507511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:35.875396  746742 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:35.879510  746742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
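
The bash pipeline above guarantees a single host.minikube.internal entry: it filters any existing mapping out of /etc/hosts, appends the network gateway (192.168.76.1), and copies the result back via sudo. The same idea in Go (a sketch for clarity, not the code that ran):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry drops any line that already maps name and appends
	// "ip\tname", mirroring the grep -v / echo / cp pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		// Trim trailing blank lines before appending so the file does not grow.
		for len(kept) > 0 && kept[len(kept)-1] == "" {
			kept = kept[:len(kept)-1]
		}
		kept = append(kept, ip+"\t"+name, "")
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
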
	I1101 12:03:35.894939  746742 kubeadm.go:884] updating cluster {Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:35.895051  746742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:35.895114  746742 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:35.986961  746742 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:35.986993  746742 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:35.987056  746742 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:36.034472  746742 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:36.034493  746742 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:36.034501  746742 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:03:36.034605  746742 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-507511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:03:36.034701  746742 ssh_runner.go:195] Run: crio config
	I1101 12:03:36.114664  746742 cni.go:84] Creating CNI manager for ""
	I1101 12:03:36.114735  746742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:36.114766  746742 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:03:36.114820  746742 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-507511 NodeName:auto-507511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:36.115007  746742 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-507511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:03:36.115128  746742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:36.126958  746742 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:36.127084  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:36.138336  746742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 12:03:36.160169  746742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:36.178653  746742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1101 12:03:36.198734  746742 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:36.206185  746742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:36.220024  746742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:36.442199  746742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:36.468984  746742 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511 for IP: 192.168.76.2
	I1101 12:03:36.469054  746742 certs.go:195] generating shared ca certs ...
	I1101 12:03:36.469095  746742 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:36.469320  746742 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:36.469399  746742 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:36.469436  746742 certs.go:257] generating profile certs ...
	I1101 12:03:36.469537  746742 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.key
	I1101 12:03:36.469571  746742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt with IP's: []
	I1101 12:03:36.808895  746742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt ...
	I1101 12:03:36.808969  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: {Name:mke85943a1ddcd8947f7a6c6f17da07a2243466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:36.809230  746742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.key ...
	I1101 12:03:36.809269  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.key: {Name:mk40b2401486d39fbe7705acf3aeecd9bba8c5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:36.809426  746742 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936
	I1101 12:03:36.809465  746742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 12:03:37.804168  746003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.377645262s)
	I1101 12:03:37.804216  746003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.371928331s)
	I1101 12:03:37.804299  746003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.984056338s)
	I1101 12:03:37.804487  746003 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.58852359s)
	I1101 12:03:37.804505  746003 api_server.go:72] duration metric: took 7.824762681s to wait for apiserver process to appear ...
	I1101 12:03:37.804511  746003 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:03:37.804527  746003 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
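
The healthz wait repeatedly probes https://192.168.85.2:8444/healthz until the apiserver answers. A minimal probe looks like the following; InsecureSkipVerify keeps the sketch short, whereas a faithful check would trust the cluster CA at /var/lib/minikube/certs/ca.crt:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: production code should verify against the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err != nil {
			fmt.Println("apiserver not ready:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}
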
	I1101 12:03:37.807530  746003 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-772362 addons enable metrics-server
	
	I1101 12:03:37.817875  746003 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 12:03:37.708298  746742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936 ...
	I1101 12:03:37.708381  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936: {Name:mk2083f04eb2884da13e99c217ade00c345e9b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:37.708607  746742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936 ...
	I1101 12:03:37.708646  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936: {Name:mk211898e90a6e8188a0f27882c9bf7d432072a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:37.708789  746742 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt
	I1101 12:03:37.708912  746742 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key
	I1101 12:03:37.709017  746742 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key
	I1101 12:03:37.709052  746742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt with IP's: []
	I1101 12:03:38.032606  746742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt ...
	I1101 12:03:38.032683  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt: {Name:mk047b93bd3ece489cb88d3e9645f31fb0582f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:38.032911  746742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key ...
	I1101 12:03:38.032945  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key: {Name:mk904b45d74a72ecb710b0507fa4c090f5a19ab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:38.033206  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:38.033276  746742 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:38.033317  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:38.033366  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:38.033421  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:38.033472  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:38.033555  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:38.034626  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:38.058949  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:38.086804  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:38.109897  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:38.139228  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 12:03:38.178580  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 12:03:38.221549  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:38.258367  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:03:38.287123  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:38.308066  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:38.330896  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:38.352081  746742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:38.366860  746742 ssh_runner.go:195] Run: openssl version
	I1101 12:03:38.373325  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:38.383487  746742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:38.387330  746742 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:38.387396  746742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:38.428805  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:03:38.437517  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:38.446716  746742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:38.450450  746742 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:38.450557  746742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:38.498332  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:38.507191  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:38.517167  746742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:38.521199  746742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:38.521294  746742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:38.562811  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
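
For context on the certificate steps above: the `openssl x509 -hash` / `ln -fs <pem> /etc/ssl/certs/<hash>.0` pairs install each CA into the OpenSSL trust directory under its subject-hash name. A minimal Go sketch of that hash-and-symlink step (not minikube's own code; it assumes `openssl` on PATH and root access to /etc/ssl/certs) looks like this:

```go
// Sketch of the hash-and-symlink trust step shown in the log above:
// openssl prints the certificate's subject hash, and the PEM is then
// linked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then point <hash>.0 at the PEM.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
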
	I1101 12:03:38.571574  746742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:38.575121  746742 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 12:03:38.575220  746742 kubeadm.go:401] StartCluster: {Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:38.575300  746742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:38.575369  746742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:38.603248  746742 cri.go:89] found id: ""
	I1101 12:03:38.603329  746742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:38.611748  746742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 12:03:38.620481  746742 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 12:03:38.620545  746742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 12:03:38.628592  746742 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 12:03:38.628611  746742 kubeadm.go:158] found existing configuration files:
	
	I1101 12:03:38.628664  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 12:03:38.636484  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 12:03:38.636581  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 12:03:38.644255  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 12:03:38.652292  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 12:03:38.652392  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 12:03:38.660035  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 12:03:38.668901  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 12:03:38.669018  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 12:03:38.676961  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 12:03:38.684930  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 12:03:38.684994  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
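
The grep/rm sequence above is the stale-config cleanup: each existing /etc/kubernetes/*.conf is checked for the expected API endpoint and removed if it is missing or points elsewhere, so that `kubeadm init` regenerates it. A small Go sketch of that check (illustrative only; paths and endpoint mirror the log, and deleting these files requires root):

```go
// Sketch of the stale-kubeconfig cleanup recorded above: any kubeconfig that
// does not reference the expected control-plane endpoint is removed (rm -f
// semantics), leaving kubeadm free to write a fresh one.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing file or wrong endpoint: treat as stale and delete it.
			os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}
```
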
	I1101 12:03:38.693180  746742 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 12:03:38.737101  746742 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 12:03:38.737265  746742 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 12:03:38.766838  746742 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 12:03:38.766921  746742 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 12:03:38.766966  746742 kubeadm.go:319] OS: Linux
	I1101 12:03:38.767021  746742 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 12:03:38.767076  746742 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 12:03:38.767155  746742 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 12:03:38.767248  746742 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 12:03:38.767377  746742 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 12:03:38.767456  746742 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 12:03:38.767510  746742 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 12:03:38.767569  746742 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 12:03:38.767630  746742 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 12:03:38.843938  746742 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 12:03:38.844556  746742 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 12:03:38.844776  746742 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 12:03:38.854765  746742 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 12:03:37.818501  746003 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 12:03:37.819682  746003 api_server.go:141] control plane version: v1.34.1
	I1101 12:03:37.819705  746003 api_server.go:131] duration metric: took 15.188389ms to wait for apiserver health ...
	I1101 12:03:37.819715  746003 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:03:37.821037  746003 addons.go:515] duration metric: took 7.841040635s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 12:03:37.824402  746003 system_pods.go:59] 8 kube-system pods found
	I1101 12:03:37.824441  746003 system_pods.go:61] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:03:37.824455  746003 system_pods.go:61] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:03:37.824461  746003 system_pods.go:61] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:03:37.824472  746003 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:03:37.824481  746003 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:03:37.824490  746003 system_pods.go:61] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:03:37.824496  746003 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:03:37.824501  746003 system_pods.go:61] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:03:37.824512  746003 system_pods.go:74] duration metric: took 4.787754ms to wait for pod list to return data ...
	I1101 12:03:37.824520  746003 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:03:37.827079  746003 default_sa.go:45] found service account: "default"
	I1101 12:03:37.827101  746003 default_sa.go:55] duration metric: took 2.575213ms for default service account to be created ...
	I1101 12:03:37.827110  746003 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:03:37.830139  746003 system_pods.go:86] 8 kube-system pods found
	I1101 12:03:37.830170  746003 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:03:37.830182  746003 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:03:37.830191  746003 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:03:37.830202  746003 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:03:37.830213  746003 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:03:37.830219  746003 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:03:37.830231  746003 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:03:37.830238  746003 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:03:37.830250  746003 system_pods.go:126] duration metric: took 3.132225ms to wait for k8s-apps to be running ...
	I1101 12:03:37.830263  746003 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:03:37.830317  746003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:03:37.874311  746003 system_svc.go:56] duration metric: took 44.037813ms WaitForService to wait for kubelet
	I1101 12:03:37.874344  746003 kubeadm.go:587] duration metric: took 7.894599768s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:03:37.874365  746003 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:03:37.878659  746003 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:03:37.878685  746003 node_conditions.go:123] node cpu capacity is 2
	I1101 12:03:37.878697  746003 node_conditions.go:105] duration metric: took 4.326529ms to run NodePressure ...
	I1101 12:03:37.878709  746003 start.go:242] waiting for startup goroutines ...
	I1101 12:03:37.878716  746003 start.go:247] waiting for cluster config update ...
	I1101 12:03:37.878727  746003 start.go:256] writing updated cluster config ...
	I1101 12:03:37.879112  746003 ssh_runner.go:195] Run: rm -f paused
	I1101 12:03:37.892055  746003 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:03:37.923370  746003 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 12:03:39.958357  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:38.860979  746742 out.go:252]   - Generating certificates and keys ...
	I1101 12:03:38.861159  746742 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 12:03:38.861266  746742 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 12:03:39.380687  746742 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 12:03:39.634604  746742 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 12:03:40.105493  746742 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 12:03:40.767667  746742 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 12:03:41.829645  746742 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 12:03:41.830046  746742 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-507511 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1101 12:03:42.435050  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:44.930246  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:42.512918  746742 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 12:03:42.513226  746742 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-507511 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 12:03:43.308352  746742 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 12:03:44.213966  746742 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 12:03:44.801543  746742 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 12:03:44.801933  746742 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 12:03:45.313841  746742 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 12:03:46.089681  746742 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 12:03:46.324376  746742 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 12:03:46.670361  746742 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 12:03:48.108342  746742 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 12:03:48.109281  746742 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 12:03:48.118425  746742 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 12:03:46.943176  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:49.430021  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:48.122006  746742 out.go:252]   - Booting up control plane ...
	I1101 12:03:48.122128  746742 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 12:03:48.122227  746742 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 12:03:48.123285  746742 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 12:03:48.147456  746742 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 12:03:48.147913  746742 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 12:03:48.159867  746742 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 12:03:48.160618  746742 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 12:03:48.160920  746742 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 12:03:48.346294  746742 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 12:03:48.346426  746742 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 12:03:50.346039  746742 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000908685s
	I1101 12:03:50.347351  746742 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 12:03:50.347647  746742 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 12:03:50.347945  746742 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 12:03:50.348768  746742 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 12:03:51.434125  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:53.930065  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:55.492503  746742 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.143342891s
	I1101 12:03:57.008856  746742 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.65945115s
	I1101 12:03:58.850917  746742 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502540023s
	I1101 12:03:58.886434  746742 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 12:03:58.908026  746742 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 12:03:58.937000  746742 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 12:03:58.937254  746742 kubeadm.go:319] [mark-control-plane] Marking the node auto-507511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 12:03:58.951671  746742 kubeadm.go:319] [bootstrap-token] Using token: grauow.5xc8kyq1ucth3q8o
	I1101 12:03:58.954604  746742 out.go:252]   - Configuring RBAC rules ...
	I1101 12:03:58.954748  746742 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 12:03:58.966536  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 12:03:58.978232  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 12:03:58.984935  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 12:03:58.990008  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 12:03:58.997254  746742 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 12:03:59.258016  746742 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 12:03:59.715270  746742 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 12:04:00.282756  746742 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 12:04:00.295126  746742 kubeadm.go:319] 
	I1101 12:04:00.295252  746742 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 12:04:00.295298  746742 kubeadm.go:319] 
	I1101 12:04:00.295438  746742 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 12:04:00.295452  746742 kubeadm.go:319] 
	I1101 12:04:00.295507  746742 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 12:04:00.295658  746742 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 12:04:00.295720  746742 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 12:04:00.295726  746742 kubeadm.go:319] 
	I1101 12:04:00.295784  746742 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 12:04:00.295788  746742 kubeadm.go:319] 
	I1101 12:04:00.295843  746742 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 12:04:00.295848  746742 kubeadm.go:319] 
	I1101 12:04:00.295904  746742 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 12:04:00.295984  746742 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 12:04:00.296064  746742 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 12:04:00.296069  746742 kubeadm.go:319] 
	I1101 12:04:00.296160  746742 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 12:04:00.296243  746742 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 12:04:00.296248  746742 kubeadm.go:319] 
	I1101 12:04:00.296337  746742 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token grauow.5xc8kyq1ucth3q8o \
	I1101 12:04:00.296449  746742 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 12:04:00.296471  746742 kubeadm.go:319] 	--control-plane 
	I1101 12:04:00.296476  746742 kubeadm.go:319] 
	I1101 12:04:00.296567  746742 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 12:04:00.296572  746742 kubeadm.go:319] 
	I1101 12:04:00.296660  746742 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grauow.5xc8kyq1ucth3q8o \
	I1101 12:04:00.296769  746742 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 12:04:00.318431  746742 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 12:04:00.318673  746742 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 12:04:00.318784  746742 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
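
The join commands printed above carry a `--discovery-token-ca-cert-hash`; kubeadm documents that value as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A short Go sketch of how such a hash can be derived (illustrative, not minikube code; the path is the CA the logs show being copied to the node):

```go
// Sketch: derive a kubeadm-style discovery-token-ca-cert-hash from a CA cert.
// The hash covers the DER-encoded Subject Public Key Info of the certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```
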
	I1101 12:04:00.318801  746742 cni.go:84] Creating CNI manager for ""
	I1101 12:04:00.318809  746742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:04:00.341343  746742 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 12:03:56.429343  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:58.930390  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:04:00.366934  746742 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 12:04:00.376767  746742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 12:04:00.376791  746742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 12:04:00.429793  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 12:04:00.782580  746742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 12:04:00.782717  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:00.782791  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-507511 minikube.k8s.io/updated_at=2025_11_01T12_04_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=auto-507511 minikube.k8s.io/primary=true
	I1101 12:04:01.005267  746742 ops.go:34] apiserver oom_adj: -16
	I1101 12:04:01.005397  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:01.505730  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:02.011190  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:02.506214  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:03.005882  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:03.505464  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:04.006756  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:04.505972  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:05.006715  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:05.144132  746742 kubeadm.go:1114] duration metric: took 4.361467488s to wait for elevateKubeSystemPrivileges
	I1101 12:04:05.144164  746742 kubeadm.go:403] duration metric: took 26.568948395s to StartCluster
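
The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: minikube polls until kube-controller-manager has created the "default" ServiceAccount before applying the cluster-admin RBAC binding. A minimal Go sketch of that loop (assumed half-second cadence matching the timestamps; command strings mirror the log, not minikube's actual implementation):

```go
// Sketch of the wait loop visible above: poll for the "default" ServiceAccount
// until it exists, then the RBAC binding for kube-system can be applied.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	for {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists; privileges can be elevated")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
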
	I1101 12:04:05.144181  746742 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:04:05.144244  746742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:04:05.145218  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:04:05.145431  746742 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:04:05.145547  746742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 12:04:05.145808  746742 config.go:182] Loaded profile config "auto-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:04:05.145848  746742 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:04:05.145915  746742 addons.go:70] Setting storage-provisioner=true in profile "auto-507511"
	I1101 12:04:05.145929  746742 addons.go:239] Setting addon storage-provisioner=true in "auto-507511"
	I1101 12:04:05.145958  746742 host.go:66] Checking if "auto-507511" exists ...
	I1101 12:04:05.146671  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:04:05.146860  746742 addons.go:70] Setting default-storageclass=true in profile "auto-507511"
	I1101 12:04:05.146884  746742 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-507511"
	I1101 12:04:05.147134  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:04:05.149040  746742 out.go:179] * Verifying Kubernetes components...
	I1101 12:04:05.155964  746742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:04:05.188798  746742 addons.go:239] Setting addon default-storageclass=true in "auto-507511"
	I1101 12:04:05.188843  746742 host.go:66] Checking if "auto-507511" exists ...
	I1101 12:04:05.189277  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:04:05.205514  746742 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1101 12:04:00.931045  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:03.428767  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:05.439940  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:04:05.209006  746742 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:04:05.209027  746742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:04:05.209099  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:04:05.227786  746742 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:04:05.227808  746742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:04:05.227878  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:04:05.254172  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:04:05.270053  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:04:05.520744  746742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 12:04:05.524798  746742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:04:05.580633  746742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:04:05.657847  746742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:04:05.911440  746742 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 12:04:05.912343  746742 node_ready.go:35] waiting up to 15m0s for node "auto-507511" to be "Ready" ...
	I1101 12:04:06.343998  746742 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 12:04:06.346832  746742 addons.go:515] duration metric: took 1.200958109s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 12:04:06.416693  746742 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-507511" context rescaled to 1 replicas
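
The long sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway IP: a hosts{} stanza is inserted just before the `forward . /etc/resolv.conf` line of the Corefile. A Go sketch of that text transformation (the string manipulation only; fetching and replacing the ConfigMap is omitted):

```go
// Sketch of the Corefile edit performed by the logged sed pipeline: insert a
// hosts block mapping host.minikube.internal to the host gateway IP directly
// above the forward plugin, with fallthrough for all other names.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hosts) // insert the hosts block just above the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}
```
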
	W1101 12:04:07.929801  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:10.428570  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:07.915290  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:10.415398  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:12.429083  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:04:14.429014  746003 pod_ready.go:94] pod "coredns-66bc5c9577-czvv4" is "Ready"
	I1101 12:04:14.429049  746003 pod_ready.go:86] duration metric: took 36.505651201s for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.432116  746003 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.437117  746003 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:14.437146  746003 pod_ready.go:86] duration metric: took 5.001353ms for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.439680  746003 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.445285  746003 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:14.445316  746003 pod_ready.go:86] duration metric: took 5.604454ms for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.448379  746003 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.627883  746003 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:14.627911  746003 pod_ready.go:86] duration metric: took 179.501399ms for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.827275  746003 pod_ready.go:83] waiting for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.227163  746003 pod_ready.go:94] pod "kube-proxy-7bbw7" is "Ready"
	I1101 12:04:15.227241  746003 pod_ready.go:86] duration metric: took 399.941443ms for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.427978  746003 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.827500  746003 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:15.827526  746003 pod_ready.go:86] duration metric: took 399.520317ms for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.827545  746003 pod_ready.go:40] duration metric: took 37.935459647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:04:15.891669  746003 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:04:15.895553  746003 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772362" cluster and "default" namespace by default
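
The pod_ready lines that ran up to the "Done!" message poll each kube-system pod until its Ready condition turns True. A sketch of that readiness predicate using the upstream k8s.io/api types (client and polling plumbing omitted; this is illustrative, not minikube's code):

```go
// Sketch of the readiness test behind the pod_ready log lines: a pod counts as
// "Ready" when its status carries a PodCondition of type Ready with status True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}
	fmt.Println(isPodReady(pod)) // false, matching the retries logged above
}
```
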
	W1101 12:04:12.915481  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:14.915637  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:17.416292  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:19.418186  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:21.918734  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:24.415968  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:26.416592  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.203924254Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f2502514-ff8a-4880-8ae9-bd952e958343 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.205224275Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=582c6520-0919-4534-b6e2-63b4d85acdde name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.206361365Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper" id=9421eb4b-f8b5-4a3a-a205-957108455c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.206551783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.213605938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.214178458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.234039277Z" level=info msg="Created container 317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper" id=9421eb4b-f8b5-4a3a-a205-957108455c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.236762709Z" level=info msg="Starting container: 317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036" id=c4597824-fea4-4c2d-bd1d-16bcd531850f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.23918519Z" level=info msg="Started container" PID=1637 containerID=317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper id=c4597824-fea4-4c2d-bd1d-16bcd531850f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088
	Nov 01 12:04:15 default-k8s-diff-port-772362 conmon[1635]: conmon 317a3675c8312fcb66af <ninfo>: container 1637 exited with status 1
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.522179988Z" level=info msg="Removing container: 23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435" id=5ec5eca1-06a6-4fa8-87cb-ddd920269453 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.53620554Z" level=info msg="Error loading conmon cgroup of container 23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435: cgroup deleted" id=5ec5eca1-06a6-4fa8-87cb-ddd920269453 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.544663758Z" level=info msg="Removed container 23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper" id=5ec5eca1-06a6-4fa8-87cb-ddd920269453 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.603769244Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.607556083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.607591915Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.607626082Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.615113358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.615146105Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.615168087Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.619469911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.619502108Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.619534043Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.623522919Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.623556651Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	317a3675c8312       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   2                   c49a469a94bac       dashboard-metrics-scraper-6ffb444bf9-z2qgm             kubernetes-dashboard
	ccb3e9649abb4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   f5f4af7d6a62e       storage-provisioner                                    kube-system
	866787adebf45       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   46650bc0c4e7c       kubernetes-dashboard-855c9754f9-v9lb6                  kubernetes-dashboard
	e7759628be0ba       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   46263cbf27492       busybox                                                default
	60d058208068e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   18c553f1b7c16       kube-proxy-7bbw7                                       kube-system
	1045dd3947bb8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   34743e04d1d15       kindnet-88g26                                          kube-system
	ae1f673a830aa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   0b3240a42540b       coredns-66bc5c9577-czvv4                               kube-system
	00aed308344f0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   f5f4af7d6a62e       storage-provisioner                                    kube-system
	81b640d642c4a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   92f65748e2dc8       etcd-default-k8s-diff-port-772362                      kube-system
	f96bb403d6b6c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   45b2226b856a1       kube-scheduler-default-k8s-diff-port-772362            kube-system
	302efc83dc595       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   888ffdba79a6a       kube-apiserver-default-k8s-diff-port-772362            kube-system
	53604a992cb8b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3eda55d992bb7       kube-controller-manager-default-k8s-diff-port-772362   kube-system
	
	
	==> coredns [ae1f673a830aae14249b0aa15c1f704cf4fe946dada0b3da9657525bdd91b06e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49838 - 13649 "HINFO IN 8702520038172837420.7295187200054632376. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014080313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-772362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-772362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-772362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T12_02_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 12:02:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-772362
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-772362
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                42af9bdf-2107-489d-bce0-eb773b707372
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-czvv4                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-772362                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-88g26                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-772362             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-772362    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-7bbw7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-772362             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z2qgm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v9lb6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-772362 event: Registered Node default-k8s-diff-port-772362 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-772362 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-772362 event: Registered Node default-k8s-diff-port-772362 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:03] overlayfs: idmapped layers are currently not supported
	[ +26.269036] overlayfs: idmapped layers are currently not supported
	[ +20.854556] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf] <==
	{"level":"warn","ts":"2025-11-01T12:03:32.894497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:32.920801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:32.950295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:32.988720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.018864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.039982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.054757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.070301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.089138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.116253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.139024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.170432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.196523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.237577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.267756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.299952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.330434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.355415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.362888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.418169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.452701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.478465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.508213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.528631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.593241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:04:31 up  3:47,  0 user,  load average: 2.89, 3.63, 3.04
	Linux default-k8s-diff-port-772362 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1045dd3947bb80515dc0cc7a58d04eef3d54108be2c3a2a779a3731110c50a24] <==
	I1101 12:03:36.371454       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:03:36.378009       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 12:03:36.378151       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:03:36.378166       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:03:36.378187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:03:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:03:36.606499       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:03:36.606525       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:03:36.606533       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:03:36.606811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:04:06.603439       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:04:06.607406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:04:06.607475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 12:04:06.607587       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 12:04:07.707135       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:04:07.707174       1 metrics.go:72] Registering metrics
	I1101 12:04:07.707260       1 controller.go:711] "Syncing nftables rules"
	I1101 12:04:16.603301       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:04:16.603356       1 main.go:301] handling current node
	I1101 12:04:26.605339       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:04:26.605374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822] <==
	I1101 12:03:35.290106       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 12:03:35.290169       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:03:35.295688       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 12:03:35.296266       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:03:35.296576       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:03:35.297117       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:03:35.297281       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 12:03:35.297319       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 12:03:35.302822       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 12:03:35.303267       1 aggregator.go:171] initial CRD sync complete...
	I1101 12:03:35.303278       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 12:03:35.303284       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 12:03:35.303290       1 cache.go:39] Caches are synced for autoregister controller
	I1101 12:03:35.388809       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1101 12:03:35.531251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:03:35.679063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:03:37.331571       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:03:37.462292       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:03:37.532500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:03:37.550477       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:03:37.689816       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.231.183"}
	I1101 12:03:37.714974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.136.61"}
	I1101 12:03:39.395097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:03:39.872673       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:03:39.942705       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249] <==
	I1101 12:03:39.381111       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:03:39.381121       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 12:03:39.381129       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 12:03:39.381139       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 12:03:39.392997       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 12:03:39.393398       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:39.411059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:39.415261       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:03:39.415349       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 12:03:39.415363       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 12:03:39.415425       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:03:39.415440       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:03:39.415462       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:03:39.416326       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:03:39.416357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:03:39.420606       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:03:39.423768       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 12:03:39.428452       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 12:03:39.428578       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:03:39.428756       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:03:39.428892       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:03:39.428926       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:03:39.438552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:03:39.439831       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 12:03:39.439965       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [60d058208068e15a38ab1917ed435ff30df2904bc304c752ea4a5232e31e1ff9] <==
	I1101 12:03:37.209849       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:03:37.619987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:03:37.720373       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:03:37.720412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 12:03:37.720478       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:03:37.867070       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:03:37.867190       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:03:37.871237       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:03:37.871601       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:03:37.871773       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:37.873020       1 config.go:200] "Starting service config controller"
	I1101 12:03:37.873086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:03:37.873129       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:03:37.873155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:03:37.873189       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:03:37.873216       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:03:37.880887       1 config.go:309] "Starting node config controller"
	I1101 12:03:37.880967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:03:37.880999       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:03:37.973456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:03:37.973566       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:03:37.973588       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093] <==
	I1101 12:03:33.433411       1 serving.go:386] Generated self-signed cert in-memory
	I1101 12:03:36.388992       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:03:36.389042       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:36.418305       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:03:36.418368       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:03:36.418404       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 12:03:36.418562       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 12:03:36.418418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:36.419727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:36.418424       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:36.419759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:36.633376       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 12:03:36.648426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:36.648584       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.173338     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e4488a24-15da-4027-9207-87a2d638e13e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-v9lb6\" (UID: \"e4488a24-15da-4027-9207-87a2d638e13e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v9lb6"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.173899     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hsh\" (UniqueName: \"kubernetes.io/projected/e4488a24-15da-4027-9207-87a2d638e13e-kube-api-access-54hsh\") pod \"kubernetes-dashboard-855c9754f9-v9lb6\" (UID: \"e4488a24-15da-4027-9207-87a2d638e13e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v9lb6"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.174011     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwcn\" (UniqueName: \"kubernetes.io/projected/616e24fd-597d-46ae-9f4c-55f05922d927-kube-api-access-9xwcn\") pod \"dashboard-metrics-scraper-6ffb444bf9-z2qgm\" (UID: \"616e24fd-597d-46ae-9f4c-55f05922d927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.174110     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/616e24fd-597d-46ae-9f4c-55f05922d927-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-z2qgm\" (UID: \"616e24fd-597d-46ae-9f4c-55f05922d927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: W1101 12:03:40.514813     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/crio-c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088 WatchSource:0}: Error finding container c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088: Status 404 returned error can't find the container with id c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088
	Nov 01 12:03:44 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:44.206235     772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 12:03:47 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:47.450458     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v9lb6" podStartSLOduration=2.105514963 podStartE2EDuration="8.450439982s" podCreationTimestamp="2025-11-01 12:03:39 +0000 UTC" firstStartedPulling="2025-11-01 12:03:40.483917093 +0000 UTC m=+11.598584616" lastFinishedPulling="2025-11-01 12:03:46.828842104 +0000 UTC m=+17.943509635" observedRunningTime="2025-11-01 12:03:47.450382308 +0000 UTC m=+18.565049839" watchObservedRunningTime="2025-11-01 12:03:47.450439982 +0000 UTC m=+18.565107505"
	Nov 01 12:03:53 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:53.452580     772 scope.go:117] "RemoveContainer" containerID="e756118195f3e5657015c3f8b4fdc9a267c22c97d5a004951dcb0db78b98f40c"
	Nov 01 12:03:54 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:54.456447     772 scope.go:117] "RemoveContainer" containerID="e756118195f3e5657015c3f8b4fdc9a267c22c97d5a004951dcb0db78b98f40c"
	Nov 01 12:03:54 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:54.456736     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:03:54 default-k8s-diff-port-772362 kubelet[772]: E1101 12:03:54.456879     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:03:55 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:55.460182     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:03:55 default-k8s-diff-port-772362 kubelet[772]: E1101 12:03:55.460350     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:00 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:00.367502     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:04:00 default-k8s-diff-port-772362 kubelet[772]: E1101 12:04:00.367715     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:06 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:06.487193     772 scope.go:117] "RemoveContainer" containerID="00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:15.203200     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:15.519318     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:15.519609     772 scope.go:117] "RemoveContainer" containerID="317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: E1101 12:04:15.519775     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:20 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:20.362573     772 scope.go:117] "RemoveContainer" containerID="317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	Nov 01 12:04:20 default-k8s-diff-port-772362 kubelet[772]: E1101 12:04:20.362757     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:28 default-k8s-diff-port-772362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:04:28 default-k8s-diff-port-772362 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:04:28 default-k8s-diff-port-772362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [866787adebf458b16bf91276a5d497a0448a1e79a43137ae5cc98aedb84d2c3c] <==
	2025/11/01 12:03:46 Using namespace: kubernetes-dashboard
	2025/11/01 12:03:46 Using in-cluster config to connect to apiserver
	2025/11/01 12:03:46 Using secret token for csrf signing
	2025/11/01 12:03:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 12:03:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 12:03:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 12:03:46 Generating JWE encryption key
	2025/11/01 12:03:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 12:03:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 12:03:47 Initializing JWE encryption key from synchronized object
	2025/11/01 12:03:47 Creating in-cluster Sidecar client
	2025/11/01 12:03:47 Serving insecurely on HTTP port: 9090
	2025/11/01 12:03:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:04:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:03:46 Starting overwatch
	
	
	==> storage-provisioner [00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175] <==
	I1101 12:03:36.343752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 12:04:06.418416       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ccb3e9649abb4d3db8b3d243402c03bb237c2ba79fff3fbf00f84ea8b516b9ab] <==
	I1101 12:04:06.536492       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 12:04:06.551346       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:04:06.551516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:04:06.554123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:10.025553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:14.286219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:17.886638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:20.940325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:23.962282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:23.967945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:04:23.968085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:04:23.968262       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772362_83cfce6f-9162-4ade-9202-0f7bca23094b!
	I1101 12:04:23.969296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f733451d-a420-4621-bd46-168ecef6ff2e", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-772362_83cfce6f-9162-4ade-9202-0f7bca23094b became leader
	W1101 12:04:23.974931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:23.980021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:04:24.069441       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772362_83cfce6f-9162-4ade-9202-0f7bca23094b!
	W1101 12:04:25.983424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:25.990550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:27.995018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:28.003482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:30.054248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:30.101254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362: exit status 2 (400.54644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-772362
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-772362:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0",
	        "Created": "2025-11-01T12:01:37.247472685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 746250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T12:03:20.866334529Z",
	            "FinishedAt": "2025-11-01T12:03:19.832824414Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/hosts",
	        "LogPath": "/var/lib/docker/containers/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0-json.log",
	        "Name": "/default-k8s-diff-port-772362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-772362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-772362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0",
	                "LowerDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902-init/diff:/var/lib/docker/overlay2/21d61574f17b4b99b161ba06788eed27ff2ed4cd88f8f323107c5ef7407644f1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21cdf12652fec796beeb5b3ab406e6343b4c0818be9e22cb01c17724709c2902/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-772362",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-772362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-772362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-772362",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-772362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1201ae80f644d8b3d59f6381f56e651287d51dd406cdfb1677e35b50426fff7",
	            "SandboxKey": "/var/run/docker/netns/f1201ae80f64",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-772362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:8a:64:8f:f0:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73eb4efd47c2bd595401a91b3c40a866a38f38c55c2d40593383e02853a1364a",
	                    "EndpointID": "1096f4fc37f42efaf5e73f105e92d1130d1e99e1c26e46598235ee1593434e20",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-772362",
	                        "087d99a3919f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362: exit status 2 (357.175597ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-772362 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-772362 logs -n 25: (1.383266775s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-198717 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │                     │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p no-preload-198717                                                                                                                                                                                                                          │ no-preload-198717            │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ delete  │ -p disable-driver-mounts-783522                                                                                                                                                                                                               │ disable-driver-mounts-783522 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:01 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:01 UTC │ 01 Nov 25 12:02 UTC │
	│ image   │ embed-certs-816860 image list --format=json                                                                                                                                                                                                   │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ pause   │ -p embed-certs-816860 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ delete  │ -p embed-certs-816860                                                                                                                                                                                                                         │ embed-certs-816860           │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable metrics-server -p newest-cni-915456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │                     │
	│ stop    │ -p newest-cni-915456 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-915456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:02 UTC │
	│ start   │ -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:02 UTC │ 01 Nov 25 12:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-772362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-772362 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ image   │ newest-cni-915456 image list --format=json                                                                                                                                                                                                    │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ pause   │ -p newest-cni-915456 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ delete  │ -p newest-cni-915456                                                                                                                                                                                                                          │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-772362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ start   │ -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:04 UTC │
	│ delete  │ -p newest-cni-915456                                                                                                                                                                                                                          │ newest-cni-915456            │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │ 01 Nov 25 12:03 UTC │
	│ start   │ -p auto-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-507511                  │ jenkins │ v1.37.0 │ 01 Nov 25 12:03 UTC │                     │
	│ image   │ default-k8s-diff-port-772362 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:04 UTC │ 01 Nov 25 12:04 UTC │
	│ pause   │ -p default-k8s-diff-port-772362 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-772362 │ jenkins │ v1.37.0 │ 01 Nov 25 12:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 12:03:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 12:03:21.999842  746742 out.go:360] Setting OutFile to fd 1 ...
	I1101 12:03:22.000049  746742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:03:22.000076  746742 out.go:374] Setting ErrFile to fd 2...
	I1101 12:03:22.000101  746742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 12:03:22.000378  746742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 12:03:22.000845  746742 out.go:368] Setting JSON to false
	I1101 12:03:22.001803  746742 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13551,"bootTime":1761985051,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 12:03:22.007720  746742 start.go:143] virtualization:  
	I1101 12:03:22.011582  746742 out.go:179] * [auto-507511] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 12:03:22.015742  746742 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 12:03:22.015873  746742 notify.go:221] Checking for updates...
	I1101 12:03:22.022113  746742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 12:03:22.025115  746742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:22.028255  746742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 12:03:22.031490  746742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 12:03:22.034440  746742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 12:03:22.038049  746742 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:22.038159  746742 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 12:03:22.070788  746742 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 12:03:22.070915  746742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:03:22.138895  746742 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-01 12:03:22.124625527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:03:22.138998  746742 docker.go:319] overlay module found
	I1101 12:03:22.142132  746742 out.go:179] * Using the docker driver based on user configuration
	I1101 12:03:22.145064  746742 start.go:309] selected driver: docker
	I1101 12:03:22.145090  746742 start.go:930] validating driver "docker" against <nil>
	I1101 12:03:22.145104  746742 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 12:03:22.145909  746742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 12:03:22.200874  746742 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-01 12:03:22.191435811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 12:03:22.201028  746742 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 12:03:22.201281  746742 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:03:22.204291  746742 out.go:179] * Using Docker driver with root privileges
	I1101 12:03:22.207146  746742 cni.go:84] Creating CNI manager for ""
	I1101 12:03:22.207227  746742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:22.207242  746742 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 12:03:22.207337  746742 start.go:353] cluster config:
	{Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 12:03:22.210488  746742 out.go:179] * Starting "auto-507511" primary control-plane node in "auto-507511" cluster
	I1101 12:03:22.213318  746742 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 12:03:22.216503  746742 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 12:03:22.219439  746742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:22.219520  746742 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 12:03:22.219524  746742 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 12:03:22.219535  746742 cache.go:59] Caching tarball of preloaded images
	I1101 12:03:22.219635  746742 preload.go:233] Found /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 12:03:22.219646  746742 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 12:03:22.219773  746742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/config.json ...
	I1101 12:03:22.219802  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/config.json: {Name:mkec428b9955b09281a48807c19dca6bbb8cf781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:22.239226  746742 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 12:03:22.239252  746742 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 12:03:22.239271  746742 cache.go:233] Successfully downloaded all kic artifacts
	I1101 12:03:22.239296  746742 start.go:360] acquireMachinesLock for auto-507511: {Name:mkd1ed91bd009dfe0cb30a20b07d722c9cbc0c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 12:03:22.239407  746742 start.go:364] duration metric: took 91.234µs to acquireMachinesLock for "auto-507511"
	I1101 12:03:22.239439  746742 start.go:93] Provisioning new machine with config: &{Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:22.239527  746742 start.go:125] createHost starting for "" (driver="docker")
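Note on the preload/base-image step above ("Checking for gcr.io/k8s-minikube/kicbase-builds... in local docker daemon" / "exists in daemon, skipping load"): the start path probes the local daemon before deciding whether a pull is needed. A minimal sketch of that probe, assuming only that the docker CLI is on PATH; the helper name is illustrative, not minikube's own:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has the
// given image reference; `docker image inspect` exits non-zero when the
// image is absent, which is enough to decide whether the pull can be skipped.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not cached locally, a pull would be required")
	}
}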
	I1101 12:03:20.833772  746003 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-772362" ...
	I1101 12:03:20.833854  746003 cli_runner.go:164] Run: docker start default-k8s-diff-port-772362
	I1101 12:03:21.163800  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:21.184042  746003 kic.go:430] container "default-k8s-diff-port-772362" state is running.
	I1101 12:03:21.184407  746003 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:03:21.212152  746003 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/config.json ...
	I1101 12:03:21.213350  746003 machine.go:94] provisionDockerMachine start ...
	I1101 12:03:21.213443  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:21.238857  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:21.239182  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:21.239192  746003 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:03:21.240326  746003 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 12:03:24.397221  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:03:24.397296  746003 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-772362"
	I1101 12:03:24.397403  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:24.416465  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:24.416764  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:24.416776  746003 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-772362 && echo "default-k8s-diff-port-772362" | sudo tee /etc/hostname
	I1101 12:03:24.580348  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-772362
	
	I1101 12:03:24.580520  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:24.603764  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:24.604076  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:24.604093  746003 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-772362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-772362/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-772362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:03:24.766090  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:03:24.766119  746003 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:03:24.766155  746003 ubuntu.go:190] setting up certificates
	I1101 12:03:24.766173  746003 provision.go:84] configureAuth start
	I1101 12:03:24.766238  746003 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:03:24.787111  746003 provision.go:143] copyHostCerts
	I1101 12:03:24.787186  746003 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:03:24.787207  746003 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:03:24.787282  746003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:03:24.787375  746003 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:03:24.787386  746003 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:03:24.787415  746003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:03:24.787480  746003 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:03:24.787490  746003 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:03:24.787520  746003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:03:24.787570  746003 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-772362 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-772362 localhost minikube]
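The copyHostCerts lines above follow a remove-then-copy pattern: any stale ca.pem/cert.pem/key.pem under the .minikube root is deleted and the copy from the certs directory is written back before the server cert (with the SAN list shown) is generated. A minimal sketch of that refresh step, assuming ordinary file permissions; the paths in main are illustrative, not the exact test paths:

package main

import (
	"io"
	"log"
	"os"
)

// refreshCert replaces dst with a fresh copy of src, mirroring the
// "found ..., removing ..." / "cp: ..." pairs in the provisioning log.
func refreshCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := refreshCert("certs/ca.pem", "ca.pem"); err != nil {
		log.Fatal(err)
	}
}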
	I1101 12:03:22.242985  746742 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 12:03:22.243242  746742 start.go:159] libmachine.API.Create for "auto-507511" (driver="docker")
	I1101 12:03:22.243288  746742 client.go:173] LocalClient.Create starting
	I1101 12:03:22.243369  746742 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem
	I1101 12:03:22.243408  746742 main.go:143] libmachine: Decoding PEM data...
	I1101 12:03:22.243429  746742 main.go:143] libmachine: Parsing certificate...
	I1101 12:03:22.243496  746742 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem
	I1101 12:03:22.243518  746742 main.go:143] libmachine: Decoding PEM data...
	I1101 12:03:22.243531  746742 main.go:143] libmachine: Parsing certificate...
	I1101 12:03:22.243925  746742 cli_runner.go:164] Run: docker network inspect auto-507511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 12:03:22.259928  746742 cli_runner.go:211] docker network inspect auto-507511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 12:03:22.260015  746742 network_create.go:284] running [docker network inspect auto-507511] to gather additional debugging logs...
	I1101 12:03:22.260036  746742 cli_runner.go:164] Run: docker network inspect auto-507511
	W1101 12:03:22.275824  746742 cli_runner.go:211] docker network inspect auto-507511 returned with exit code 1
	I1101 12:03:22.275857  746742 network_create.go:287] error running [docker network inspect auto-507511]: docker network inspect auto-507511: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-507511 not found
	I1101 12:03:22.275875  746742 network_create.go:289] output of [docker network inspect auto-507511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-507511 not found
	
	** /stderr **
	I1101 12:03:22.275966  746742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:22.292361  746742 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
	I1101 12:03:22.292694  746742 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f319e39f8d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:35:a5:64:2d:20} reservation:<nil>}
	I1101 12:03:22.293035  746742 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce7deea9bf12 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:09:be:7b:bb:7b} reservation:<nil>}
	I1101 12:03:22.293469  746742 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c89e0}
	I1101 12:03:22.293501  746742 network_create.go:124] attempt to create docker network auto-507511 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 12:03:22.293555  746742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-507511 auto-507511
	I1101 12:03:22.350678  746742 network_create.go:108] docker network auto-507511 192.168.76.0/24 created
	I1101 12:03:22.350708  746742 kic.go:121] calculated static IP "192.168.76.2" for the "auto-507511" container
	I1101 12:03:22.350794  746742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 12:03:22.366994  746742 cli_runner.go:164] Run: docker volume create auto-507511 --label name.minikube.sigs.k8s.io=auto-507511 --label created_by.minikube.sigs.k8s.io=true
	I1101 12:03:22.384910  746742 oci.go:103] Successfully created a docker volume auto-507511
	I1101 12:03:22.385000  746742 cli_runner.go:164] Run: docker run --rm --name auto-507511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-507511 --entrypoint /usr/bin/test -v auto-507511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 12:03:22.905406  746742 oci.go:107] Successfully prepared a docker volume auto-507511
	I1101 12:03:22.905462  746742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:22.905483  746742 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 12:03:22.905559  746742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-507511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
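The network_create lines for auto-507511 show the free-subnet search: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already claimed by existing bridges, so the walk lands on 192.168.76.0/24, with the gateway at .1 and the node's static IP calculated as .2. A minimal sketch of that walk, assuming the step of 9 in the third octet visible in the log and a caller-supplied set of taken subnets:

package main

import "fmt"

// firstFreeSubnet walks 192.168.<start>.0/24, 192.168.<start+step>.0/24, ...
// and returns the first candidate not already claimed by an existing
// docker bridge network.
func firstFreeSubnet(taken map[string]bool, start, step, limit int) (string, bool) {
	for octet := start; octet <= limit; octet += step {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // existing minikube bridges from the log
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	if cidr, ok := firstFreeSubnet(taken, 49, 9, 247); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.76.0/24 here
	}
}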
	I1101 12:03:25.649686  746003 provision.go:177] copyRemoteCerts
	I1101 12:03:25.649813  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:03:25.649893  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:25.668456  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:25.775225  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:25.798065  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 12:03:25.816522  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:25.837365  746003 provision.go:87] duration metric: took 1.071165158s to configureAuth
	I1101 12:03:25.837433  746003 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:25.837642  746003 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:25.837781  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:25.855533  746003 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:25.855876  746003 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33820 <nil> <nil>}
	I1101 12:03:25.855896  746003 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:26.329224  746003 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:26.329260  746003 machine.go:97] duration metric: took 5.115885408s to provisionDockerMachine
	I1101 12:03:26.329272  746003 start.go:293] postStartSetup for "default-k8s-diff-port-772362" (driver="docker")
	I1101 12:03:26.329302  746003 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:26.329403  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:26.329478  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.351936  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.457490  746003 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:26.461100  746003 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:26.461132  746003 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:26.461146  746003 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:26.461201  746003 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:26.461290  746003 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:26.461393  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:26.468825  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:26.486454  746003 start.go:296] duration metric: took 157.165741ms for postStartSetup
	I1101 12:03:26.486545  746003 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:26.486584  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.503490  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.607032  746003 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:26.611734  746003 fix.go:56] duration metric: took 5.798051625s for fixHost
	I1101 12:03:26.611806  746003 start.go:83] releasing machines lock for "default-k8s-diff-port-772362", held for 5.798156463s
	I1101 12:03:26.611903  746003 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-772362
	I1101 12:03:26.628286  746003 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:26.628342  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.628658  746003 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:26.628711  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:26.650476  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.650629  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:26.749369  746003 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:26.843112  746003 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:26.892577  746003 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:26.897357  746003 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:26.897478  746003 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:26.905377  746003 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 12:03:26.905400  746003 start.go:496] detecting cgroup driver to use...
	I1101 12:03:26.905432  746003 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:03:26.905508  746003 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:26.920855  746003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:26.936547  746003 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:26.936615  746003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:26.953008  746003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:26.966139  746003 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:27.082119  746003 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:27.198275  746003 docker.go:234] disabling docker service ...
	I1101 12:03:27.198354  746003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:27.213535  746003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:27.228455  746003 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:27.363734  746003 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:27.519790  746003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:27.538616  746003 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:27.554334  746003 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:27.554405  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.564413  746003 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:27.564479  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.576981  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.591160  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.610009  746003 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:27.619027  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.633764  746003 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.648181  746003 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:27.658467  746003 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:27.667791  746003 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:27.685671  746003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:27.858008  746003 ssh_runner.go:195] Run: sudo systemctl restart crio
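The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon_cgroup, unprivileged-port sysctl), then reloads systemd and restarts cri-o. A sketch of the command sequence, built from the sed invocations in the log; only the two main substitutions are reproduced, and in the real run these are executed over SSH inside the node container rather than locally:

package main

import "fmt"

// crioConfigCommands returns the shell commands used to point cri-o at the
// desired pause image and cgroup driver before restarting it.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10.1", "cgroupfs") {
		fmt.Println(c)
	}
}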
	I1101 12:03:28.050352  746003 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:28.050431  746003 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:28.063083  746003 start.go:564] Will wait 60s for crictl version
	I1101 12:03:28.063141  746003 ssh_runner.go:195] Run: which crictl
	I1101 12:03:28.073712  746003 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:28.158893  746003 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:28.158974  746003 ssh_runner.go:195] Run: crio --version
	I1101 12:03:28.234757  746003 ssh_runner.go:195] Run: crio --version
	I1101 12:03:28.291235  746003 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:03:28.295116  746003 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-772362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:28.315140  746003 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:28.319985  746003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
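The grep/bash pair above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out and a fresh "ip<TAB>hostname" entry is appended. A minimal in-memory sketch of the same rewrite, assuming tab-separated hosts entries; the function name is illustrative:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line ending in "<TAB>name" and appends a
// fresh "ip<TAB>name" entry, mirroring the one-liner run against /etc/hosts.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.85.1", "host.minikube.internal"))
}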
	I1101 12:03:28.333616  746003 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:28.333850  746003 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:28.333924  746003 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:28.381873  746003 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:28.381901  746003 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:28.381956  746003 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:28.415125  746003 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:28.415154  746003 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:28.415162  746003 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 12:03:28.415264  746003 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-772362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:03:28.415346  746003 ssh_runner.go:195] Run: crio config
	I1101 12:03:28.527753  746003 cni.go:84] Creating CNI manager for ""
	I1101 12:03:28.527787  746003 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:28.527808  746003 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:03:28.527831  746003 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-772362 NodeName:default-k8s-diff-port-772362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:28.527986  746003 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-772362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
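The generated config above is later transferred to /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down) before the cluster is (re)configured. As a generic sketch only, not the exact command minikube runs, this is what handing such a config to kubeadm typically looks like; the rendered content here is a placeholder:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Placeholder for the rendered kubeadm config shown in the log.
	rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...rest of the generated config...\n")
	path := "/var/tmp/minikube/kubeadm.yaml.new"
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		log.Fatal(err)
	}
	// Dry-run keeps the sketch side-effect free; a real init would omit it.
	out, err := exec.Command("kubeadm", "init", "--config", path, "--dry-run").CombinedOutput()
	if err != nil {
		log.Printf("kubeadm: %v", err)
	}
	os.Stdout.Write(out)
}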
	
	I1101 12:03:28.528070  746003 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:28.539852  746003 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:28.539932  746003 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:28.550581  746003 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 12:03:28.571059  746003 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:28.593765  746003 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 12:03:28.614466  746003 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:28.619572  746003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:28.632265  746003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:28.856299  746003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:28.885412  746003 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362 for IP: 192.168.85.2
	I1101 12:03:28.885436  746003 certs.go:195] generating shared ca certs ...
	I1101 12:03:28.885453  746003 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:28.885618  746003 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:28.885670  746003 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:28.885682  746003 certs.go:257] generating profile certs ...
	I1101 12:03:28.885816  746003 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.key
	I1101 12:03:28.885897  746003 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key.c6086429
	I1101 12:03:28.885944  746003 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key
	I1101 12:03:28.886085  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:28.886135  746003 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:28.886149  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:28.886183  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:28.886214  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:28.886240  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:28.886302  746003 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:28.886968  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:28.913875  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:28.960218  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:28.991347  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:29.028732  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 12:03:29.059862  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 12:03:29.105614  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:29.148836  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:03:29.197749  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:29.225007  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:29.249282  746003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:29.273682  746003 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:29.291334  746003 ssh_runner.go:195] Run: openssl version
	I1101 12:03:29.300065  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:29.312423  746003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:29.320183  746003 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:29.320252  746003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:29.371110  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:29.381005  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:29.391032  746003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:29.394931  746003 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:29.394999  746003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:29.442049  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:03:29.450442  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:29.458925  746003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:29.463043  746003 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:29.463148  746003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:29.505801  746003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:03:29.514239  746003 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:29.518353  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 12:03:29.559589  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 12:03:29.601978  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 12:03:29.645474  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 12:03:29.696917  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 12:03:29.762245  746003 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
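Note: the `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). A minimal Go sketch of the same check using only the standard library; the path and 24h window are taken from the log, while the function and variable names are illustrative, not minikube's actual code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// has a NotAfter inside the next d -- the same question `-checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path copied from the log above; 24h matches -checkend 86400.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}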
	I1101 12:03:29.854106  746003 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-772362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-772362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:29.854278  746003 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:29.854380  746003 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:29.903390  746003 cri.go:89] found id: "81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf"
	I1101 12:03:29.903413  746003 cri.go:89] found id: "f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093"
	I1101 12:03:29.903428  746003 cri.go:89] found id: "302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822"
	I1101 12:03:29.903432  746003 cri.go:89] found id: "53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249"
	I1101 12:03:29.903441  746003 cri.go:89] found id: ""
	I1101 12:03:29.903494  746003 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 12:03:29.924868  746003 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T12:03:29Z" level=error msg="open /run/runc: no such file or directory"
	I1101 12:03:29.925010  746003 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:29.948225  746003 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 12:03:29.948287  746003 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 12:03:29.948386  746003 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 12:03:29.963667  746003 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 12:03:29.964162  746003 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-772362" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:29.964328  746003 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-532863/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-772362" cluster setting kubeconfig missing "default-k8s-diff-port-772362" context setting]
	I1101 12:03:29.964761  746003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:29.966652  746003 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 12:03:29.978385  746003 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 12:03:29.978483  746003 kubeadm.go:602] duration metric: took 30.152734ms to restartPrimaryControlPlane
	I1101 12:03:29.978511  746003 kubeadm.go:403] duration metric: took 124.415918ms to StartCluster
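Note: the restart path above hinges on a plain `diff -u` between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new; an exit status of 0 is read as "does not require reconfiguration". A hypothetical Go sketch of that decision (function names and error handling are illustrative, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig mirrors the check logged above: diff exiting 0 means the
// rendered config matches what is already on the node.
func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical: skip reconfiguration
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // files differ: reconfigure the control plane
	}
	return false, err // diff itself could not run
}

func main() {
	diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("reconfigure:", diff, "err:", err)
}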
	I1101 12:03:29.978540  746003 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:29.978648  746003 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:03:29.979403  746003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:29.979682  746003 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:03:29.979997  746003 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:03:29.980071  746003 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-772362"
	I1101 12:03:29.980085  746003 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-772362"
	W1101 12:03:29.980090  746003 addons.go:248] addon storage-provisioner should already be in state true
	I1101 12:03:29.980114  746003 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:03:29.980557  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:29.981236  746003 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-772362"
	I1101 12:03:29.981280  746003 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-772362"
	W1101 12:03:29.981317  746003 addons.go:248] addon dashboard should already be in state true
	I1101 12:03:29.981369  746003 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:03:29.982015  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:29.982209  746003 config.go:182] Loaded profile config "default-k8s-diff-port-772362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:29.982314  746003 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-772362"
	I1101 12:03:29.982347  746003 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-772362"
	I1101 12:03:29.982649  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:29.996498  746003 out.go:179] * Verifying Kubernetes components...
	I1101 12:03:30.003843  746003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:30.077352  746003 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 12:03:30.081032  746003 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 12:03:30.083976  746003 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-772362"
	W1101 12:03:30.084005  746003 addons.go:248] addon default-storageclass should already be in state true
	I1101 12:03:30.084033  746003 host.go:66] Checking if "default-k8s-diff-port-772362" exists ...
	I1101 12:03:30.084299  746003 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:30.084316  746003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:03:30.084385  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:30.084941  746003 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-772362 --format={{.State.Status}}
	I1101 12:03:30.085185  746003 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 12:03:30.090909  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 12:03:30.090948  746003 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 12:03:30.091033  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:30.137971  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:30.144443  746003 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:30.144467  746003 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:03:30.144550  746003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-772362
	I1101 12:03:30.173770  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:30.183901  746003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33820 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/default-k8s-diff-port-772362/id_rsa Username:docker}
	I1101 12:03:30.364022  746003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:30.387453  746003 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:03:30.422081  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 12:03:30.422107  746003 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 12:03:30.426490  746003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:03:30.432257  746003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:03:27.425343  746742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-507511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.519746198s)
	I1101 12:03:27.425374  746742 kic.go:203] duration metric: took 4.519887353s to extract preloaded images to volume ...
	W1101 12:03:27.425519  746742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 12:03:27.425640  746742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 12:03:27.532467  746742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-507511 --name auto-507511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-507511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-507511 --network auto-507511 --ip 192.168.76.2 --volume auto-507511:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 12:03:27.896869  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Running}}
	I1101 12:03:27.927771  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:03:27.960296  746742 cli_runner.go:164] Run: docker exec auto-507511 stat /var/lib/dpkg/alternatives/iptables
	I1101 12:03:28.024438  746742 oci.go:144] the created container "auto-507511" has a running status.
	I1101 12:03:28.024465  746742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa...
	I1101 12:03:28.061203  746742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 12:03:28.090777  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:03:28.114759  746742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 12:03:28.114888  746742 kic_runner.go:114] Args: [docker exec --privileged auto-507511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 12:03:28.174501  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:03:28.205683  746742 machine.go:94] provisionDockerMachine start ...
	I1101 12:03:28.205784  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:28.231212  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:28.231550  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:28.231567  746742 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 12:03:28.232258  746742 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58626->127.0.0.1:33825: read: connection reset by peer
	I1101 12:03:31.413408  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-507511
	
	I1101 12:03:31.413483  746742 ubuntu.go:182] provisioning hostname "auto-507511"
	I1101 12:03:31.413576  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:31.448109  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:31.448426  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:31.448436  746742 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-507511 && echo "auto-507511" | sudo tee /etc/hostname
	I1101 12:03:31.640645  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-507511
	
	I1101 12:03:31.640717  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:31.664002  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:31.664322  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:31.664345  746742 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-507511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-507511/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-507511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 12:03:31.860002  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 12:03:31.860079  746742 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-532863/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-532863/.minikube}
	I1101 12:03:31.860145  746742 ubuntu.go:190] setting up certificates
	I1101 12:03:31.860193  746742 provision.go:84] configureAuth start
	I1101 12:03:31.860282  746742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-507511
	I1101 12:03:31.887253  746742 provision.go:143] copyHostCerts
	I1101 12:03:31.887331  746742 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem, removing ...
	I1101 12:03:31.887347  746742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem
	I1101 12:03:31.887424  746742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/ca.pem (1078 bytes)
	I1101 12:03:31.887517  746742 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem, removing ...
	I1101 12:03:31.887529  746742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem
	I1101 12:03:31.887557  746742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/cert.pem (1123 bytes)
	I1101 12:03:31.887621  746742 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem, removing ...
	I1101 12:03:31.887631  746742 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem
	I1101 12:03:31.887657  746742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-532863/.minikube/key.pem (1675 bytes)
	I1101 12:03:31.887711  746742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem org=jenkins.auto-507511 san=[127.0.0.1 192.168.76.2 auto-507511 localhost minikube]
	I1101 12:03:30.463402  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 12:03:30.463426  746003 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 12:03:30.562473  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 12:03:30.562495  746003 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 12:03:30.639094  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 12:03:30.639117  746003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 12:03:30.691015  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 12:03:30.691037  746003 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 12:03:30.717247  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 12:03:30.717269  746003 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 12:03:30.739585  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 12:03:30.739663  746003 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 12:03:30.766510  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 12:03:30.766537  746003 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 12:03:30.791559  746003 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:30.791581  746003 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 12:03:30.820144  746003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 12:03:35.215847  746003 node_ready.go:49] node "default-k8s-diff-port-772362" is "Ready"
	I1101 12:03:35.215877  746003 node_ready.go:38] duration metric: took 4.828391439s for node "default-k8s-diff-port-772362" to be "Ready" ...
	I1101 12:03:35.215892  746003 api_server.go:52] waiting for apiserver process to appear ...
	I1101 12:03:35.215946  746003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 12:03:32.637175  746742 provision.go:177] copyRemoteCerts
	I1101 12:03:32.637252  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 12:03:32.637298  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:32.654947  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:32.770970  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 12:03:32.799530  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 12:03:32.828512  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 12:03:32.860816  746742 provision.go:87] duration metric: took 1.000586236s to configureAuth
	I1101 12:03:32.860845  746742 ubuntu.go:206] setting minikube options for container-runtime
	I1101 12:03:32.861030  746742 config.go:182] Loaded profile config "auto-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:03:32.861159  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:32.886890  746742 main.go:143] libmachine: Using SSH client type: native
	I1101 12:03:32.887209  746742 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33825 <nil> <nil>}
	I1101 12:03:32.887236  746742 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 12:03:33.257134  746742 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 12:03:33.257154  746742 machine.go:97] duration metric: took 5.051437094s to provisionDockerMachine
	I1101 12:03:33.257164  746742 client.go:176] duration metric: took 11.013864269s to LocalClient.Create
	I1101 12:03:33.257185  746742 start.go:167] duration metric: took 11.013944779s to libmachine.API.Create "auto-507511"
	I1101 12:03:33.257193  746742 start.go:293] postStartSetup for "auto-507511" (driver="docker")
	I1101 12:03:33.257216  746742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 12:03:33.257282  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 12:03:33.257335  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.286798  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.402180  746742 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 12:03:33.406201  746742 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 12:03:33.406233  746742 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 12:03:33.406245  746742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/addons for local assets ...
	I1101 12:03:33.406299  746742 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-532863/.minikube/files for local assets ...
	I1101 12:03:33.406393  746742 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem -> 5347202.pem in /etc/ssl/certs
	I1101 12:03:33.406506  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 12:03:33.422950  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:33.461042  746742 start.go:296] duration metric: took 203.81518ms for postStartSetup
	I1101 12:03:33.461403  746742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-507511
	I1101 12:03:33.489873  746742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/config.json ...
	I1101 12:03:33.490162  746742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 12:03:33.490212  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.519938  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.642252  746742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 12:03:33.647375  746742 start.go:128] duration metric: took 11.407833277s to createHost
	I1101 12:03:33.647402  746742 start.go:83] releasing machines lock for "auto-507511", held for 11.407979683s
	I1101 12:03:33.647472  746742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-507511
	I1101 12:03:33.689218  746742 ssh_runner.go:195] Run: cat /version.json
	I1101 12:03:33.689275  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.689524  746742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 12:03:33.689573  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:03:33.721679  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.729246  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:03:33.846422  746742 ssh_runner.go:195] Run: systemctl --version
	I1101 12:03:33.940645  746742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 12:03:34.012304  746742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 12:03:34.026408  746742 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 12:03:34.026558  746742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 12:03:34.071581  746742 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 12:03:34.071657  746742 start.go:496] detecting cgroup driver to use...
	I1101 12:03:34.071738  746742 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 12:03:34.071833  746742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 12:03:34.100138  746742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 12:03:34.119565  746742 docker.go:218] disabling cri-docker service (if available) ...
	I1101 12:03:34.119678  746742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 12:03:34.141647  746742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 12:03:34.166530  746742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 12:03:34.353783  746742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 12:03:34.552746  746742 docker.go:234] disabling docker service ...
	I1101 12:03:34.552863  746742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 12:03:34.597193  746742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 12:03:34.614615  746742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 12:03:34.829468  746742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 12:03:35.038089  746742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 12:03:35.060073  746742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 12:03:35.083598  746742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 12:03:35.083762  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.103058  746742 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 12:03:35.103193  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.125884  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.142444  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.158341  746742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 12:03:35.171774  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.184753  746742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.204523  746742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 12:03:35.213313  746742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 12:03:35.224720  746742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 12:03:35.232775  746742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:35.475400  746742 ssh_runner.go:195] Run: sudo systemctl restart crio
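Note: the series of `sed -i` commands above pins the pause image to registry.k8s.io/pause:3.10.1 and switches CRI-O's cgroup manager to cgroupfs by rewriting lines in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of the first two substitutions; the sample input below is invented purely for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of the two config lines the sed commands rewrite.
	conf := []byte(`pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`)
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	fmt.Print(string(conf))
}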
	I1101 12:03:35.673349  746742 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 12:03:35.673480  746742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 12:03:35.681964  746742 start.go:564] Will wait 60s for crictl version
	I1101 12:03:35.682127  746742 ssh_runner.go:195] Run: which crictl
	I1101 12:03:35.686401  746742 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 12:03:35.734373  746742 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 12:03:35.734518  746742 ssh_runner.go:195] Run: crio --version
	I1101 12:03:35.793047  746742 ssh_runner.go:195] Run: crio --version
	I1101 12:03:35.846641  746742 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 12:03:35.849743  746742 cli_runner.go:164] Run: docker network inspect auto-507511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 12:03:35.875396  746742 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 12:03:35.879510  746742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:35.894939  746742 kubeadm.go:884] updating cluster {Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 12:03:35.895051  746742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 12:03:35.895114  746742 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:35.986961  746742 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:35.986993  746742 crio.go:433] Images already preloaded, skipping extraction
	I1101 12:03:35.987056  746742 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 12:03:36.034472  746742 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 12:03:36.034493  746742 cache_images.go:86] Images are preloaded, skipping loading
	I1101 12:03:36.034501  746742 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 12:03:36.034605  746742 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-507511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 12:03:36.034701  746742 ssh_runner.go:195] Run: crio config
	I1101 12:03:36.114664  746742 cni.go:84] Creating CNI manager for ""
	I1101 12:03:36.114735  746742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:03:36.114766  746742 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 12:03:36.114820  746742 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-507511 NodeName:auto-507511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 12:03:36.115007  746742 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-507511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 12:03:36.115128  746742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 12:03:36.126958  746742 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 12:03:36.127084  746742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 12:03:36.138336  746742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 12:03:36.160169  746742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 12:03:36.178653  746742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1101 12:03:36.198734  746742 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 12:03:36.206185  746742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 12:03:36.220024  746742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:03:36.442199  746742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:03:36.468984  746742 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511 for IP: 192.168.76.2
	I1101 12:03:36.469054  746742 certs.go:195] generating shared ca certs ...
	I1101 12:03:36.469095  746742 certs.go:227] acquiring lock for ca certs: {Name:mkf1eb1b0a157a52860366e1243b59ec23d70467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:36.469320  746742 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key
	I1101 12:03:36.469399  746742 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key
	I1101 12:03:36.469436  746742 certs.go:257] generating profile certs ...
	I1101 12:03:36.469537  746742 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.key
	I1101 12:03:36.469571  746742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt with IP's: []
	I1101 12:03:36.808895  746742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt ...
	I1101 12:03:36.808969  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: {Name:mke85943a1ddcd8947f7a6c6f17da07a2243466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:36.809230  746742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.key ...
	I1101 12:03:36.809269  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.key: {Name:mk40b2401486d39fbe7705acf3aeecd9bba8c5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:36.809426  746742 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936
	I1101 12:03:36.809465  746742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
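Note: the apiserver serving certificate generated above carries IP SANs for the first service IP (10.96.0.1), loopback, 10.0.0.1, and the node IP 192.168.76.2. A self-contained Go sketch that issues a certificate with the same IP SANs; it self-signs for brevity, whereas minikube signs with its cluster CA, and all other identifiers here are illustrative assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed here for brevity; minikube signs this cert with its cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}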
	I1101 12:03:37.804168  746003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.377645262s)
	I1101 12:03:37.804216  746003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.371928331s)
	I1101 12:03:37.804299  746003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.984056338s)
	I1101 12:03:37.804487  746003 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.58852359s)
	I1101 12:03:37.804505  746003 api_server.go:72] duration metric: took 7.824762681s to wait for apiserver process to appear ...
	I1101 12:03:37.804511  746003 api_server.go:88] waiting for apiserver healthz status ...
	I1101 12:03:37.804527  746003 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
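Note: once the kube-apiserver process is found, minikube polls the /healthz endpoint shown above until it reports healthy. A hedged Go sketch of such a probe; the insecure TLS setting is a shortcut for the example only, a real check would trust the cluster CA, and the URL is simply the one from the log line:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for the sketch: skip verification instead of loading the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok" when healthy
}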
	I1101 12:03:37.807530  746003 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-772362 addons enable metrics-server
	
	I1101 12:03:37.817875  746003 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 12:03:37.708298  746742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936 ...
	I1101 12:03:37.708381  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936: {Name:mk2083f04eb2884da13e99c217ade00c345e9b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:37.708607  746742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936 ...
	I1101 12:03:37.708646  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936: {Name:mk211898e90a6e8188a0f27882c9bf7d432072a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:37.708789  746742 certs.go:382] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt.375f5936 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt
	I1101 12:03:37.708912  746742 certs.go:386] copying /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key.375f5936 -> /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key
	I1101 12:03:37.709017  746742 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key
	I1101 12:03:37.709052  746742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt with IP's: []
	I1101 12:03:38.032606  746742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt ...
	I1101 12:03:38.032683  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt: {Name:mk047b93bd3ece489cb88d3e9645f31fb0582f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:38.032911  746742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key ...
	I1101 12:03:38.032945  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key: {Name:mk904b45d74a72ecb710b0507fa4c090f5a19ab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:03:38.033206  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem (1338 bytes)
	W1101 12:03:38.033276  746742 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720_empty.pem, impossibly tiny 0 bytes
	I1101 12:03:38.033317  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 12:03:38.033366  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/ca.pem (1078 bytes)
	I1101 12:03:38.033421  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/cert.pem (1123 bytes)
	I1101 12:03:38.033472  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/certs/key.pem (1675 bytes)
	I1101 12:03:38.033555  746742 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem (1708 bytes)
	I1101 12:03:38.034626  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 12:03:38.058949  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 12:03:38.086804  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 12:03:38.109897  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 12:03:38.139228  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 12:03:38.178580  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 12:03:38.221549  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 12:03:38.258367  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 12:03:38.287123  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/certs/534720.pem --> /usr/share/ca-certificates/534720.pem (1338 bytes)
	I1101 12:03:38.308066  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/ssl/certs/5347202.pem --> /usr/share/ca-certificates/5347202.pem (1708 bytes)
	I1101 12:03:38.330896  746742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-532863/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 12:03:38.352081  746742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 12:03:38.366860  746742 ssh_runner.go:195] Run: openssl version
	I1101 12:03:38.373325  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534720.pem && ln -fs /usr/share/ca-certificates/534720.pem /etc/ssl/certs/534720.pem"
	I1101 12:03:38.383487  746742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534720.pem
	I1101 12:03:38.387330  746742 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:55 /usr/share/ca-certificates/534720.pem
	I1101 12:03:38.387396  746742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534720.pem
	I1101 12:03:38.428805  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534720.pem /etc/ssl/certs/51391683.0"
	I1101 12:03:38.437517  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5347202.pem && ln -fs /usr/share/ca-certificates/5347202.pem /etc/ssl/certs/5347202.pem"
	I1101 12:03:38.446716  746742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5347202.pem
	I1101 12:03:38.450450  746742 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:55 /usr/share/ca-certificates/5347202.pem
	I1101 12:03:38.450557  746742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5347202.pem
	I1101 12:03:38.498332  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5347202.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 12:03:38.507191  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 12:03:38.517167  746742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:38.521199  746742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:38.521294  746742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 12:03:38.562811  746742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 12:03:38.571574  746742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 12:03:38.575121  746742 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 12:03:38.575220  746742 kubeadm.go:401] StartCluster: {Name:auto-507511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-507511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 12:03:38.575300  746742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 12:03:38.575369  746742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 12:03:38.603248  746742 cri.go:89] found id: ""
	I1101 12:03:38.603329  746742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 12:03:38.611748  746742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 12:03:38.620481  746742 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 12:03:38.620545  746742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 12:03:38.628592  746742 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 12:03:38.628611  746742 kubeadm.go:158] found existing configuration files:
	
	I1101 12:03:38.628664  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 12:03:38.636484  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 12:03:38.636581  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 12:03:38.644255  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 12:03:38.652292  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 12:03:38.652392  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 12:03:38.660035  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 12:03:38.668901  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 12:03:38.669018  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 12:03:38.676961  746742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 12:03:38.684930  746742 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 12:03:38.684994  746742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 12:03:38.693180  746742 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 12:03:38.737101  746742 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 12:03:38.737265  746742 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 12:03:38.766838  746742 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 12:03:38.766921  746742 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 12:03:38.766966  746742 kubeadm.go:319] OS: Linux
	I1101 12:03:38.767021  746742 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 12:03:38.767076  746742 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 12:03:38.767155  746742 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 12:03:38.767248  746742 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 12:03:38.767377  746742 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 12:03:38.767456  746742 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 12:03:38.767510  746742 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 12:03:38.767569  746742 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 12:03:38.767630  746742 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 12:03:38.843938  746742 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 12:03:38.844556  746742 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 12:03:38.844776  746742 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 12:03:38.854765  746742 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 12:03:37.818501  746003 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 12:03:37.819682  746003 api_server.go:141] control plane version: v1.34.1
	I1101 12:03:37.819705  746003 api_server.go:131] duration metric: took 15.188389ms to wait for apiserver health ...
	I1101 12:03:37.819715  746003 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 12:03:37.821037  746003 addons.go:515] duration metric: took 7.841040635s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 12:03:37.824402  746003 system_pods.go:59] 8 kube-system pods found
	I1101 12:03:37.824441  746003 system_pods.go:61] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:03:37.824455  746003 system_pods.go:61] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:03:37.824461  746003 system_pods.go:61] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:03:37.824472  746003 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:03:37.824481  746003 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:03:37.824490  746003 system_pods.go:61] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:03:37.824496  746003 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:03:37.824501  746003 system_pods.go:61] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:03:37.824512  746003 system_pods.go:74] duration metric: took 4.787754ms to wait for pod list to return data ...
	I1101 12:03:37.824520  746003 default_sa.go:34] waiting for default service account to be created ...
	I1101 12:03:37.827079  746003 default_sa.go:45] found service account: "default"
	I1101 12:03:37.827101  746003 default_sa.go:55] duration metric: took 2.575213ms for default service account to be created ...
	I1101 12:03:37.827110  746003 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 12:03:37.830139  746003 system_pods.go:86] 8 kube-system pods found
	I1101 12:03:37.830170  746003 system_pods.go:89] "coredns-66bc5c9577-czvv4" [0b8370f6-202f-4b70-a478-0186533d331b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 12:03:37.830182  746003 system_pods.go:89] "etcd-default-k8s-diff-port-772362" [875d07a1-a505-4866-8651-c460c2a0be74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 12:03:37.830191  746003 system_pods.go:89] "kindnet-88g26" [6e30bed5-15e4-4798-96a1-a7baf8f34f3c] Running
	I1101 12:03:37.830202  746003 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-772362" [350bae2a-9a58-4749-ae71-aec28f0bd6a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 12:03:37.830213  746003 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-772362" [8a8dc212-0685-4fad-9e7b-04659f64e836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 12:03:37.830219  746003 system_pods.go:89] "kube-proxy-7bbw7" [3f1bbaf5-14a6-4155-898c-a9df5340bafc] Running
	I1101 12:03:37.830231  746003 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-772362" [eb70f522-9b84-4860-b1f7-ff06750161f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 12:03:37.830238  746003 system_pods.go:89] "storage-provisioner" [8e5a477e-257d-4c98-82a6-4339be5e401e] Running
	I1101 12:03:37.830250  746003 system_pods.go:126] duration metric: took 3.132225ms to wait for k8s-apps to be running ...
	I1101 12:03:37.830263  746003 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 12:03:37.830317  746003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 12:03:37.874311  746003 system_svc.go:56] duration metric: took 44.037813ms WaitForService to wait for kubelet
	I1101 12:03:37.874344  746003 kubeadm.go:587] duration metric: took 7.894599768s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 12:03:37.874365  746003 node_conditions.go:102] verifying NodePressure condition ...
	I1101 12:03:37.878659  746003 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 12:03:37.878685  746003 node_conditions.go:123] node cpu capacity is 2
	I1101 12:03:37.878697  746003 node_conditions.go:105] duration metric: took 4.326529ms to run NodePressure ...
	I1101 12:03:37.878709  746003 start.go:242] waiting for startup goroutines ...
	I1101 12:03:37.878716  746003 start.go:247] waiting for cluster config update ...
	I1101 12:03:37.878727  746003 start.go:256] writing updated cluster config ...
	I1101 12:03:37.879112  746003 ssh_runner.go:195] Run: rm -f paused
	I1101 12:03:37.892055  746003 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:03:37.923370  746003 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 12:03:39.958357  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:38.860979  746742 out.go:252]   - Generating certificates and keys ...
	I1101 12:03:38.861159  746742 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 12:03:38.861266  746742 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 12:03:39.380687  746742 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 12:03:39.634604  746742 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 12:03:40.105493  746742 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 12:03:40.767667  746742 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 12:03:41.829645  746742 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 12:03:41.830046  746742 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-507511 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1101 12:03:42.435050  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:44.930246  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:42.512918  746742 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 12:03:42.513226  746742 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-507511 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 12:03:43.308352  746742 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 12:03:44.213966  746742 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 12:03:44.801543  746742 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 12:03:44.801933  746742 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 12:03:45.313841  746742 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 12:03:46.089681  746742 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 12:03:46.324376  746742 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 12:03:46.670361  746742 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 12:03:48.108342  746742 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 12:03:48.109281  746742 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 12:03:48.118425  746742 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 12:03:46.943176  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:49.430021  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:48.122006  746742 out.go:252]   - Booting up control plane ...
	I1101 12:03:48.122128  746742 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 12:03:48.122227  746742 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 12:03:48.123285  746742 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 12:03:48.147456  746742 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 12:03:48.147913  746742 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 12:03:48.159867  746742 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 12:03:48.160618  746742 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 12:03:48.160920  746742 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 12:03:48.346294  746742 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 12:03:48.346426  746742 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 12:03:50.346039  746742 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000908685s
	I1101 12:03:50.347351  746742 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 12:03:50.347647  746742 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 12:03:50.347945  746742 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 12:03:50.348768  746742 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 12:03:51.434125  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:53.930065  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:03:55.492503  746742 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.143342891s
	I1101 12:03:57.008856  746742 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.65945115s
	I1101 12:03:58.850917  746742 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502540023s
	I1101 12:03:58.886434  746742 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 12:03:58.908026  746742 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 12:03:58.937000  746742 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 12:03:58.937254  746742 kubeadm.go:319] [mark-control-plane] Marking the node auto-507511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 12:03:58.951671  746742 kubeadm.go:319] [bootstrap-token] Using token: grauow.5xc8kyq1ucth3q8o
	I1101 12:03:58.954604  746742 out.go:252]   - Configuring RBAC rules ...
	I1101 12:03:58.954748  746742 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 12:03:58.966536  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 12:03:58.978232  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 12:03:58.984935  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 12:03:58.990008  746742 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 12:03:58.997254  746742 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 12:03:59.258016  746742 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 12:03:59.715270  746742 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 12:04:00.282756  746742 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 12:04:00.295126  746742 kubeadm.go:319] 
	I1101 12:04:00.295252  746742 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 12:04:00.295298  746742 kubeadm.go:319] 
	I1101 12:04:00.295438  746742 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 12:04:00.295452  746742 kubeadm.go:319] 
	I1101 12:04:00.295507  746742 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 12:04:00.295658  746742 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 12:04:00.295720  746742 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 12:04:00.295726  746742 kubeadm.go:319] 
	I1101 12:04:00.295784  746742 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 12:04:00.295788  746742 kubeadm.go:319] 
	I1101 12:04:00.295843  746742 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 12:04:00.295848  746742 kubeadm.go:319] 
	I1101 12:04:00.295904  746742 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 12:04:00.295984  746742 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 12:04:00.296064  746742 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 12:04:00.296069  746742 kubeadm.go:319] 
	I1101 12:04:00.296160  746742 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 12:04:00.296243  746742 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 12:04:00.296248  746742 kubeadm.go:319] 
	I1101 12:04:00.296337  746742 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token grauow.5xc8kyq1ucth3q8o \
	I1101 12:04:00.296449  746742 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 \
	I1101 12:04:00.296471  746742 kubeadm.go:319] 	--control-plane 
	I1101 12:04:00.296476  746742 kubeadm.go:319] 
	I1101 12:04:00.296567  746742 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 12:04:00.296572  746742 kubeadm.go:319] 
	I1101 12:04:00.296660  746742 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grauow.5xc8kyq1ucth3q8o \
	I1101 12:04:00.296769  746742 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6154fe00c4b3b6d1ce4f3500ef815797b79de90371950bebbded24106e2601a8 
	I1101 12:04:00.318431  746742 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 12:04:00.318673  746742 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 12:04:00.318784  746742 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 12:04:00.318801  746742 cni.go:84] Creating CNI manager for ""
	I1101 12:04:00.318809  746742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 12:04:00.341343  746742 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 12:03:56.429343  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:03:58.930390  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:04:00.366934  746742 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 12:04:00.376767  746742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 12:04:00.376791  746742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 12:04:00.429793  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 12:04:00.782580  746742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 12:04:00.782717  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:00.782791  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-507511 minikube.k8s.io/updated_at=2025_11_01T12_04_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=auto-507511 minikube.k8s.io/primary=true
	I1101 12:04:01.005267  746742 ops.go:34] apiserver oom_adj: -16
	I1101 12:04:01.005397  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:01.505730  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:02.011190  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:02.506214  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:03.005882  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:03.505464  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:04.006756  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:04.505972  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:05.006715  746742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 12:04:05.144132  746742 kubeadm.go:1114] duration metric: took 4.361467488s to wait for elevateKubeSystemPrivileges
	I1101 12:04:05.144164  746742 kubeadm.go:403] duration metric: took 26.568948395s to StartCluster
	I1101 12:04:05.144181  746742 settings.go:142] acquiring lock: {Name:mkcec05b3b9abd727f12cc8fc6d8b8719f9d2893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:04:05.144244  746742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 12:04:05.145218  746742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/kubeconfig: {Name:mk48b340ab8169449b11ec70cb4900037359d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 12:04:05.145431  746742 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 12:04:05.145547  746742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 12:04:05.145808  746742 config.go:182] Loaded profile config "auto-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 12:04:05.145848  746742 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 12:04:05.145915  746742 addons.go:70] Setting storage-provisioner=true in profile "auto-507511"
	I1101 12:04:05.145929  746742 addons.go:239] Setting addon storage-provisioner=true in "auto-507511"
	I1101 12:04:05.145958  746742 host.go:66] Checking if "auto-507511" exists ...
	I1101 12:04:05.146671  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:04:05.146860  746742 addons.go:70] Setting default-storageclass=true in profile "auto-507511"
	I1101 12:04:05.146884  746742 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-507511"
	I1101 12:04:05.147134  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:04:05.149040  746742 out.go:179] * Verifying Kubernetes components...
	I1101 12:04:05.155964  746742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 12:04:05.188798  746742 addons.go:239] Setting addon default-storageclass=true in "auto-507511"
	I1101 12:04:05.188843  746742 host.go:66] Checking if "auto-507511" exists ...
	I1101 12:04:05.189277  746742 cli_runner.go:164] Run: docker container inspect auto-507511 --format={{.State.Status}}
	I1101 12:04:05.205514  746742 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1101 12:04:00.931045  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:03.428767  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:05.439940  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:04:05.209006  746742 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:04:05.209027  746742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 12:04:05.209099  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:04:05.227786  746742 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 12:04:05.227808  746742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 12:04:05.227878  746742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-507511
	I1101 12:04:05.254172  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:04:05.270053  746742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33825 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/auto-507511/id_rsa Username:docker}
	I1101 12:04:05.520744  746742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 12:04:05.524798  746742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 12:04:05.580633  746742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 12:04:05.657847  746742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 12:04:05.911440  746742 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 12:04:05.912343  746742 node_ready.go:35] waiting up to 15m0s for node "auto-507511" to be "Ready" ...
	I1101 12:04:06.343998  746742 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 12:04:06.346832  746742 addons.go:515] duration metric: took 1.200958109s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 12:04:06.416693  746742 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-507511" context rescaled to 1 replicas
	W1101 12:04:07.929801  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:10.428570  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	W1101 12:04:07.915290  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:10.415398  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:12.429083  746003 pod_ready.go:104] pod "coredns-66bc5c9577-czvv4" is not "Ready", error: <nil>
	I1101 12:04:14.429014  746003 pod_ready.go:94] pod "coredns-66bc5c9577-czvv4" is "Ready"
	I1101 12:04:14.429049  746003 pod_ready.go:86] duration metric: took 36.505651201s for pod "coredns-66bc5c9577-czvv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.432116  746003 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.437117  746003 pod_ready.go:94] pod "etcd-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:14.437146  746003 pod_ready.go:86] duration metric: took 5.001353ms for pod "etcd-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.439680  746003 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.445285  746003 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:14.445316  746003 pod_ready.go:86] duration metric: took 5.604454ms for pod "kube-apiserver-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.448379  746003 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.627883  746003 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:14.627911  746003 pod_ready.go:86] duration metric: took 179.501399ms for pod "kube-controller-manager-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:14.827275  746003 pod_ready.go:83] waiting for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.227163  746003 pod_ready.go:94] pod "kube-proxy-7bbw7" is "Ready"
	I1101 12:04:15.227241  746003 pod_ready.go:86] duration metric: took 399.941443ms for pod "kube-proxy-7bbw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.427978  746003 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.827500  746003 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-772362" is "Ready"
	I1101 12:04:15.827526  746003 pod_ready.go:86] duration metric: took 399.520317ms for pod "kube-scheduler-default-k8s-diff-port-772362" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 12:04:15.827545  746003 pod_ready.go:40] duration metric: took 37.935459647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 12:04:15.891669  746003 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 12:04:15.895553  746003 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-772362" cluster and "default" namespace by default
	W1101 12:04:12.915481  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:14.915637  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:17.416292  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:19.418186  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:21.918734  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:24.415968  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:26.416592  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:28.915422  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	W1101 12:04:30.915597  746742 node_ready.go:57] node "auto-507511" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.203924254Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f2502514-ff8a-4880-8ae9-bd952e958343 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.205224275Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=582c6520-0919-4534-b6e2-63b4d85acdde name=/runtime.v1.ImageService/ImageStatus
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.206361365Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper" id=9421eb4b-f8b5-4a3a-a205-957108455c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.206551783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.213605938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.214178458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.234039277Z" level=info msg="Created container 317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper" id=9421eb4b-f8b5-4a3a-a205-957108455c95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.236762709Z" level=info msg="Starting container: 317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036" id=c4597824-fea4-4c2d-bd1d-16bcd531850f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.23918519Z" level=info msg="Started container" PID=1637 containerID=317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper id=c4597824-fea4-4c2d-bd1d-16bcd531850f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088
	Nov 01 12:04:15 default-k8s-diff-port-772362 conmon[1635]: conmon 317a3675c8312fcb66af <ninfo>: container 1637 exited with status 1
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.522179988Z" level=info msg="Removing container: 23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435" id=5ec5eca1-06a6-4fa8-87cb-ddd920269453 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.53620554Z" level=info msg="Error loading conmon cgroup of container 23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435: cgroup deleted" id=5ec5eca1-06a6-4fa8-87cb-ddd920269453 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:04:15 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:15.544663758Z" level=info msg="Removed container 23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm/dashboard-metrics-scraper" id=5ec5eca1-06a6-4fa8-87cb-ddd920269453 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.603769244Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.607556083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.607591915Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.607626082Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.615113358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.615146105Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.615168087Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.619469911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.619502108Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.619534043Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.623522919Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 12:04:16 default-k8s-diff-port-772362 crio[648]: time="2025-11-01T12:04:16.623556651Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	317a3675c8312       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   c49a469a94bac       dashboard-metrics-scraper-6ffb444bf9-z2qgm             kubernetes-dashboard
	ccb3e9649abb4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   f5f4af7d6a62e       storage-provisioner                                    kube-system
	866787adebf45       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   46650bc0c4e7c       kubernetes-dashboard-855c9754f9-v9lb6                  kubernetes-dashboard
	e7759628be0ba       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   46263cbf27492       busybox                                                default
	60d058208068e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   18c553f1b7c16       kube-proxy-7bbw7                                       kube-system
	1045dd3947bb8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   34743e04d1d15       kindnet-88g26                                          kube-system
	ae1f673a830aa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   0b3240a42540b       coredns-66bc5c9577-czvv4                               kube-system
	00aed308344f0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   f5f4af7d6a62e       storage-provisioner                                    kube-system
	81b640d642c4a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   92f65748e2dc8       etcd-default-k8s-diff-port-772362                      kube-system
	f96bb403d6b6c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   45b2226b856a1       kube-scheduler-default-k8s-diff-port-772362            kube-system
	302efc83dc595       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   888ffdba79a6a       kube-apiserver-default-k8s-diff-port-772362            kube-system
	53604a992cb8b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3eda55d992bb7       kube-controller-manager-default-k8s-diff-port-772362   kube-system
	
	
	==> coredns [ae1f673a830aae14249b0aa15c1f704cf4fe946dada0b3da9657525bdd91b06e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49838 - 13649 "HINFO IN 8702520038172837420.7295187200054632376. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014080313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-772362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-772362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-772362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T12_02_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 12:02:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-772362
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 12:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:01:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 12:04:06 +0000   Sat, 01 Nov 2025 12:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-772362
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                42af9bdf-2107-489d-bce0-eb773b707372
	  Boot ID:                    a8ac8503-6b7a-4208-b896-162cdcafe81c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-czvv4                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-772362                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-88g26                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-772362             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-772362    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-7bbw7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-772362             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z2qgm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v9lb6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-772362 event: Registered Node default-k8s-diff-port-772362 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-772362 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-772362 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-772362 event: Registered Node default-k8s-diff-port-772362 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:41] overlayfs: idmapped layers are currently not supported
	[ +17.790204] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:42] overlayfs: idmapped layers are currently not supported
	[ +26.551720] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:44] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:45] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:47] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:49] overlayfs: idmapped layers are currently not supported
	[ +24.600805] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:52] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:54] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:55] overlayfs: idmapped layers are currently not supported
	[ +23.270059] overlayfs: idmapped layers are currently not supported
	[ +19.412513] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:57] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:58] overlayfs: idmapped layers are currently not supported
	[Nov 1 11:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:00] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:01] overlayfs: idmapped layers are currently not supported
	[ +52.263508] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 12:03] overlayfs: idmapped layers are currently not supported
	[ +26.269036] overlayfs: idmapped layers are currently not supported
	[ +20.854556] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [81b640d642c4a033a2066adee4e3f0b09cae8a8df5d4558591aa4e5f194359cf] <==
	{"level":"warn","ts":"2025-11-01T12:03:32.894497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:32.920801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:32.950295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:32.988720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.018864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.039982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.054757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.070301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.089138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.116253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.139024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.170432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.196523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.237577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.267756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.299952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.330434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.355415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.362888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.418169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.452701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.478465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.508213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.528631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T12:03:33.593241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:04:33 up  3:47,  0 user,  load average: 3.06, 3.65, 3.05
	Linux default-k8s-diff-port-772362 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1045dd3947bb80515dc0cc7a58d04eef3d54108be2c3a2a779a3731110c50a24] <==
	I1101 12:03:36.371454       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 12:03:36.378009       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 12:03:36.378151       1 main.go:148] setting mtu 1500 for CNI 
	I1101 12:03:36.378166       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 12:03:36.378187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T12:03:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 12:03:36.606499       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 12:03:36.606525       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 12:03:36.606533       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 12:03:36.606811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 12:04:06.603439       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 12:04:06.607406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 12:04:06.607475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 12:04:06.607587       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 12:04:07.707135       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 12:04:07.707174       1 metrics.go:72] Registering metrics
	I1101 12:04:07.707260       1 controller.go:711] "Syncing nftables rules"
	I1101 12:04:16.603301       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:04:16.603356       1 main.go:301] handling current node
	I1101 12:04:26.605339       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 12:04:26.605374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [302efc83dc595d0d69aa551f9cc9f21aea9f5603913f8c8a601f65423c799822] <==
	I1101 12:03:35.290106       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 12:03:35.290169       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 12:03:35.295688       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 12:03:35.296266       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 12:03:35.296576       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 12:03:35.297117       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 12:03:35.297281       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 12:03:35.297319       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 12:03:35.302822       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 12:03:35.303267       1 aggregator.go:171] initial CRD sync complete...
	I1101 12:03:35.303278       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 12:03:35.303284       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 12:03:35.303290       1 cache.go:39] Caches are synced for autoregister controller
	I1101 12:03:35.388809       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1101 12:03:35.531251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 12:03:35.679063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 12:03:37.331571       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 12:03:37.462292       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 12:03:37.532500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 12:03:37.550477       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 12:03:37.689816       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.231.183"}
	I1101 12:03:37.714974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.136.61"}
	I1101 12:03:39.395097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 12:03:39.872673       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 12:03:39.942705       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [53604a992cb8b97edf6f8b57e315089f1b817fa526ca575f87c8d55f22389249] <==
	I1101 12:03:39.381111       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 12:03:39.381121       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 12:03:39.381129       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 12:03:39.381139       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 12:03:39.392997       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 12:03:39.393398       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:39.411059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 12:03:39.415261       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 12:03:39.415349       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 12:03:39.415363       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 12:03:39.415425       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 12:03:39.415440       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 12:03:39.415462       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:03:39.416326       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 12:03:39.416357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 12:03:39.420606       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 12:03:39.423768       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 12:03:39.428452       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 12:03:39.428578       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 12:03:39.428756       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 12:03:39.428892       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 12:03:39.428926       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 12:03:39.438552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 12:03:39.439831       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 12:03:39.439965       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [60d058208068e15a38ab1917ed435ff30df2904bc304c752ea4a5232e31e1ff9] <==
	I1101 12:03:37.209849       1 server_linux.go:53] "Using iptables proxy"
	I1101 12:03:37.619987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 12:03:37.720373       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 12:03:37.720412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 12:03:37.720478       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 12:03:37.867070       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 12:03:37.867190       1 server_linux.go:132] "Using iptables Proxier"
	I1101 12:03:37.871237       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 12:03:37.871601       1 server.go:527] "Version info" version="v1.34.1"
	I1101 12:03:37.871773       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:37.873020       1 config.go:200] "Starting service config controller"
	I1101 12:03:37.873086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 12:03:37.873129       1 config.go:106] "Starting endpoint slice config controller"
	I1101 12:03:37.873155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 12:03:37.873189       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 12:03:37.873216       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 12:03:37.880887       1 config.go:309] "Starting node config controller"
	I1101 12:03:37.880967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 12:03:37.880999       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 12:03:37.973456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 12:03:37.973566       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 12:03:37.973588       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f96bb403d6b6c123105828e3f84d5ebf20a34529af731f64c66cb9c0669a5093] <==
	I1101 12:03:33.433411       1 serving.go:386] Generated self-signed cert in-memory
	I1101 12:03:36.388992       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 12:03:36.389042       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 12:03:36.418305       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 12:03:36.418368       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 12:03:36.418404       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 12:03:36.418562       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 12:03:36.418418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:36.419727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 12:03:36.418424       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:36.419759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:36.633376       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 12:03:36.648426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 12:03:36.648584       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.173338     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e4488a24-15da-4027-9207-87a2d638e13e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-v9lb6\" (UID: \"e4488a24-15da-4027-9207-87a2d638e13e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v9lb6"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.173899     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hsh\" (UniqueName: \"kubernetes.io/projected/e4488a24-15da-4027-9207-87a2d638e13e-kube-api-access-54hsh\") pod \"kubernetes-dashboard-855c9754f9-v9lb6\" (UID: \"e4488a24-15da-4027-9207-87a2d638e13e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v9lb6"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.174011     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwcn\" (UniqueName: \"kubernetes.io/projected/616e24fd-597d-46ae-9f4c-55f05922d927-kube-api-access-9xwcn\") pod \"dashboard-metrics-scraper-6ffb444bf9-z2qgm\" (UID: \"616e24fd-597d-46ae-9f4c-55f05922d927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:40.174110     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/616e24fd-597d-46ae-9f4c-55f05922d927-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-z2qgm\" (UID: \"616e24fd-597d-46ae-9f4c-55f05922d927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm"
	Nov 01 12:03:40 default-k8s-diff-port-772362 kubelet[772]: W1101 12:03:40.514813     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/087d99a3919fbfec05a427ca47ba8b0e64cee188ced1394cc244ea1dcec815f0/crio-c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088 WatchSource:0}: Error finding container c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088: Status 404 returned error can't find the container with id c49a469a94bac7d829326ac0a6ce0a2d1c8f3d62891d4741fdf7d45a2ec4d088
	Nov 01 12:03:44 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:44.206235     772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 12:03:47 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:47.450458     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v9lb6" podStartSLOduration=2.105514963 podStartE2EDuration="8.450439982s" podCreationTimestamp="2025-11-01 12:03:39 +0000 UTC" firstStartedPulling="2025-11-01 12:03:40.483917093 +0000 UTC m=+11.598584616" lastFinishedPulling="2025-11-01 12:03:46.828842104 +0000 UTC m=+17.943509635" observedRunningTime="2025-11-01 12:03:47.450382308 +0000 UTC m=+18.565049839" watchObservedRunningTime="2025-11-01 12:03:47.450439982 +0000 UTC m=+18.565107505"
	Nov 01 12:03:53 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:53.452580     772 scope.go:117] "RemoveContainer" containerID="e756118195f3e5657015c3f8b4fdc9a267c22c97d5a004951dcb0db78b98f40c"
	Nov 01 12:03:54 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:54.456447     772 scope.go:117] "RemoveContainer" containerID="e756118195f3e5657015c3f8b4fdc9a267c22c97d5a004951dcb0db78b98f40c"
	Nov 01 12:03:54 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:54.456736     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:03:54 default-k8s-diff-port-772362 kubelet[772]: E1101 12:03:54.456879     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:03:55 default-k8s-diff-port-772362 kubelet[772]: I1101 12:03:55.460182     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:03:55 default-k8s-diff-port-772362 kubelet[772]: E1101 12:03:55.460350     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:00 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:00.367502     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:04:00 default-k8s-diff-port-772362 kubelet[772]: E1101 12:04:00.367715     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:06 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:06.487193     772 scope.go:117] "RemoveContainer" containerID="00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:15.203200     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:15.519318     772 scope.go:117] "RemoveContainer" containerID="23709eebe257750448ed21a6d1dde54d75257662914496d38e1df89add104435"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:15.519609     772 scope.go:117] "RemoveContainer" containerID="317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	Nov 01 12:04:15 default-k8s-diff-port-772362 kubelet[772]: E1101 12:04:15.519775     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:20 default-k8s-diff-port-772362 kubelet[772]: I1101 12:04:20.362573     772 scope.go:117] "RemoveContainer" containerID="317a3675c8312fcb66afa66e05a9799e3feab250082ba2f6cbc8d9aba138a036"
	Nov 01 12:04:20 default-k8s-diff-port-772362 kubelet[772]: E1101 12:04:20.362757     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z2qgm_kubernetes-dashboard(616e24fd-597d-46ae-9f4c-55f05922d927)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z2qgm" podUID="616e24fd-597d-46ae-9f4c-55f05922d927"
	Nov 01 12:04:28 default-k8s-diff-port-772362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 12:04:28 default-k8s-diff-port-772362 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 12:04:28 default-k8s-diff-port-772362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [866787adebf458b16bf91276a5d497a0448a1e79a43137ae5cc98aedb84d2c3c] <==
	2025/11/01 12:03:46 Starting overwatch
	2025/11/01 12:03:46 Using namespace: kubernetes-dashboard
	2025/11/01 12:03:46 Using in-cluster config to connect to apiserver
	2025/11/01 12:03:46 Using secret token for csrf signing
	2025/11/01 12:03:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 12:03:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 12:03:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 12:03:46 Generating JWE encryption key
	2025/11/01 12:03:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 12:03:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 12:03:47 Initializing JWE encryption key from synchronized object
	2025/11/01 12:03:47 Creating in-cluster Sidecar client
	2025/11/01 12:03:47 Serving insecurely on HTTP port: 9090
	2025/11/01 12:03:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 12:04:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [00aed308344f086574af655c9996a7b641715d301430dc08c96ff996ef60c175] <==
	I1101 12:03:36.343752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 12:04:06.418416       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ccb3e9649abb4d3db8b3d243402c03bb237c2ba79fff3fbf00f84ea8b516b9ab] <==
	I1101 12:04:06.551346       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 12:04:06.551516       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 12:04:06.554123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:10.025553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:14.286219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:17.886638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:20.940325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:23.962282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:23.967945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:04:23.968085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 12:04:23.968262       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772362_83cfce6f-9162-4ade-9202-0f7bca23094b!
	I1101 12:04:23.969296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f733451d-a420-4621-bd46-168ecef6ff2e", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-772362_83cfce6f-9162-4ade-9202-0f7bca23094b became leader
	W1101 12:04:23.974931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:23.980021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 12:04:24.069441       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-772362_83cfce6f-9162-4ade-9202-0f7bca23094b!
	W1101 12:04:25.983424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:25.990550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:27.995018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:28.003482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:30.054248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:30.101254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:32.106594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:32.114539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:34.117984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 12:04:34.180409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362: exit status 2 (397.3111ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.21s)
E1101 12:09:49.443052  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:49.449541  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:49.460937  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:49.482500  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:49.523999  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:49.605727  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:49.767999  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:50.089845  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:50.731918  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:52.013332  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:54.575009  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:54.654579  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:09:59.696331  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:09.937828  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:22.354932  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (258/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.81
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.44
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 166.74
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.77
48 TestAddons/StoppedEnableDisable 12.45
49 TestCertOptions 39.97
50 TestCertExpiration 243.31
52 TestForceSystemdFlag 39.85
53 TestForceSystemdEnv 42.85
58 TestErrorSpam/setup 32.8
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 5.34
62 TestErrorSpam/unpause 5.95
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.49
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.49
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 37.67
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.51
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.58
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 10.63
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.05
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 26.93
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.43
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.34
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 7.26
130 TestFunctional/parallel/MountCmd/specific-port 1.91
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
132 TestFunctional/parallel/ServiceCmd/List 0.63
133 TestFunctional/parallel/ServiceCmd/JSONOutput 1.43
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 1
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
144 TestFunctional/parallel/ImageCommands/Setup 0.65
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 197.94
163 TestMultiControlPlane/serial/DeployApp 6.39
164 TestMultiControlPlane/serial/PingHostFromPods 1.55
165 TestMultiControlPlane/serial/AddWorkerNode 62.37
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.49
169 TestMultiControlPlane/serial/StopSecondaryNode 12.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 143.26
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.8
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
176 TestMultiControlPlane/serial/StopCluster 36.13
177 TestMultiControlPlane/serial/RestartCluster 89.41
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.92
179 TestMultiControlPlane/serial/AddSecondaryNode 82.56
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
185 TestJSONOutput/start/Command 78.22
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 41.93
211 TestKicCustomNetwork/use_default_bridge_network 37.62
212 TestKicExistingNetwork 34.7
213 TestKicCustomSubnet 36.49
214 TestKicStaticIP 34.36
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.52
219 TestMountStart/serial/StartWithMountFirst 9.42
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.73
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 7.8
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 140.13
231 TestMultiNode/serial/DeployApp2Nodes 4.79
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 59.5
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.82
237 TestMultiNode/serial/StopNode 2.45
238 TestMultiNode/serial/StartAfterStop 8.45
239 TestMultiNode/serial/RestartKeepsNodes 82.36
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.29
242 TestMultiNode/serial/RestartMultiNode 49.42
243 TestMultiNode/serial/ValidateNameConflict 37.85
248 TestPreload 129.26
250 TestScheduledStopUnix 107.47
253 TestInsufficientStorage 13.6
254 TestRunningBinaryUpgrade 49.95
256 TestKubernetesUpgrade 342.08
257 TestMissingContainerUpgrade 104.71
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 43.77
261 TestNoKubernetes/serial/StartWithStopK8s 117.26
262 TestNoKubernetes/serial/Start 8.45
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 31.41
265 TestNoKubernetes/serial/Stop 1.35
266 TestNoKubernetes/serial/StartNoArgs 7.57
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
268 TestStoppedBinaryUpgrade/Setup 0.73
269 TestStoppedBinaryUpgrade/Upgrade 54.74
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
279 TestPause/serial/Start 84.38
280 TestPause/serial/SecondStartNoReconfiguration 29.22
289 TestNetworkPlugins/group/false 5.45
294 TestStartStop/group/old-k8s-version/serial/FirstStart 60.12
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
297 TestStartStop/group/old-k8s-version/serial/Stop 12.05
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 53.96
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
305 TestStartStop/group/no-preload/serial/FirstStart 76.86
307 TestStartStop/group/embed-certs/serial/FirstStart 89.28
308 TestStartStop/group/no-preload/serial/DeployApp 8.39
310 TestStartStop/group/no-preload/serial/Stop 12.03
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/no-preload/serial/SecondStart 51.31
313 TestStartStop/group/embed-certs/serial/DeployApp 10.5
315 TestStartStop/group/embed-certs/serial/Stop 12.6
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 52.09
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.07
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
329 TestStartStop/group/newest-cni/serial/FirstStart 38.33
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.4
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
334 TestStartStop/group/newest-cni/serial/SecondStart 18.06
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.53
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.93
344 TestNetworkPlugins/group/auto/Start 86.78
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
349 TestNetworkPlugins/group/kindnet/Start 55.92
350 TestNetworkPlugins/group/auto/KubeletFlags 0.39
351 TestNetworkPlugins/group/auto/NetCatPod 12.35
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/calico/Start 63.75
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
359 TestNetworkPlugins/group/kindnet/DNS 0.24
360 TestNetworkPlugins/group/kindnet/Localhost 0.2
361 TestNetworkPlugins/group/kindnet/HairPin 0.2
362 TestNetworkPlugins/group/custom-flannel/Start 67.8
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.47
365 TestNetworkPlugins/group/calico/NetCatPod 12.35
366 TestNetworkPlugins/group/calico/DNS 0.23
367 TestNetworkPlugins/group/calico/Localhost 0.2
368 TestNetworkPlugins/group/calico/HairPin 0.21
369 TestNetworkPlugins/group/enable-default-cni/Start 75.79
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.36
372 TestNetworkPlugins/group/custom-flannel/DNS 0.21
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
375 TestNetworkPlugins/group/flannel/Start 61.03
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
381 TestNetworkPlugins/group/flannel/ControllerPod 6
382 TestNetworkPlugins/group/bridge/Start 81.69
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
384 TestNetworkPlugins/group/flannel/NetCatPod 11.37
385 TestNetworkPlugins/group/flannel/DNS 0.2
386 TestNetworkPlugins/group/flannel/Localhost 0.18
387 TestNetworkPlugins/group/flannel/HairPin 0.2
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
389 TestNetworkPlugins/group/bridge/NetCatPod 9.26
390 TestNetworkPlugins/group/bridge/DNS 0.21
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (9.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-186382 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-186382 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.807488429s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.81s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 10:48:23.296496  534720 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 10:48:23.296579  534720 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-186382
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-186382: exit status 85 (97.139809ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-186382 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-186382 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:48:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:48:13.539218  534726 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:48:13.539441  534726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:13.539473  534726 out.go:374] Setting ErrFile to fd 2...
	I1101 10:48:13.539494  534726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:13.539772  534726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	W1101 10:48:13.539943  534726 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21830-532863/.minikube/config/config.json: open /home/jenkins/minikube-integration/21830-532863/.minikube/config/config.json: no such file or directory
	I1101 10:48:13.540375  534726 out.go:368] Setting JSON to true
	I1101 10:48:13.541249  534726 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9043,"bootTime":1761985051,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:48:13.541346  534726 start.go:143] virtualization:  
	I1101 10:48:13.545467  534726 out.go:99] [download-only-186382] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1101 10:48:13.545675  534726 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 10:48:13.545777  534726 notify.go:221] Checking for updates...
	I1101 10:48:13.548679  534726 out.go:171] MINIKUBE_LOCATION=21830
	I1101 10:48:13.551789  534726 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:48:13.554677  534726 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:48:13.557534  534726 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 10:48:13.560372  534726 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 10:48:13.566033  534726 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 10:48:13.566297  534726 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:48:13.593236  534726 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:48:13.593369  534726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:13.655851  534726 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 10:48:13.646727791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:13.655967  534726 docker.go:319] overlay module found
	I1101 10:48:13.659162  534726 out.go:99] Using the docker driver based on user configuration
	I1101 10:48:13.659218  534726 start.go:309] selected driver: docker
	I1101 10:48:13.659228  534726 start.go:930] validating driver "docker" against <nil>
	I1101 10:48:13.659345  534726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:13.714510  534726 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 10:48:13.70557721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:13.714669  534726 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:48:13.714944  534726 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 10:48:13.715103  534726 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 10:48:13.718248  534726 out.go:171] Using Docker driver with root privileges
	I1101 10:48:13.721255  534726 cni.go:84] Creating CNI manager for ""
	I1101 10:48:13.721332  534726 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:48:13.721347  534726 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:48:13.721450  534726 start.go:353] cluster config:
	{Name:download-only-186382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-186382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:48:13.724444  534726 out.go:99] Starting "download-only-186382" primary control-plane node in "download-only-186382" cluster
	I1101 10:48:13.724475  534726 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:48:13.727324  534726 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:48:13.727386  534726 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:48:13.727450  534726 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:48:13.743007  534726 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:48:13.743213  534726 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 10:48:13.743311  534726 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:48:13.783362  534726 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 10:48:13.783394  534726 cache.go:59] Caching tarball of preloaded images
	I1101 10:48:13.783553  534726 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:48:13.786930  534726 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 10:48:13.786974  534726 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1101 10:48:13.875298  534726 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1101 10:48:13.875433  534726 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 10:48:16.785656  534726 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:48:16.786065  534726 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/download-only-186382/config.json ...
	I1101 10:48:16.786101  534726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/download-only-186382/config.json: {Name:mkd541d50875952cb770f8fe95c55897dae31224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:48:16.786290  534726 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:48:16.786467  534726 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21830-532863/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-186382 host does not exist
	  To start a cluster, run: "minikube start -p download-only-186382"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-186382
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-491444 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-491444 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.438410037s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.44s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 10:48:28.183238  534720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 10:48:28.183281  534720 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-532863/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-491444
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-491444: exit status 85 (92.538844ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-186382 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-186382 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ delete  │ -p download-only-186382                                                                                                                                                   │ download-only-186382 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │ 01 Nov 25 10:48 UTC │
	│ start   │ -o=json --download-only -p download-only-491444 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-491444 │ jenkins │ v1.37.0 │ 01 Nov 25 10:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:48:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:48:23.787283  534924 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:48:23.787410  534924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:23.787422  534924 out.go:374] Setting ErrFile to fd 2...
	I1101 10:48:23.787427  534924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:48:23.787686  534924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 10:48:23.788108  534924 out.go:368] Setting JSON to true
	I1101 10:48:23.788937  534924 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9053,"bootTime":1761985051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 10:48:23.789006  534924 start.go:143] virtualization:  
	I1101 10:48:23.792562  534924 out.go:99] [download-only-491444] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:48:23.792800  534924 notify.go:221] Checking for updates...
	I1101 10:48:23.795753  534924 out.go:171] MINIKUBE_LOCATION=21830
	I1101 10:48:23.798841  534924 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:48:23.801911  534924 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 10:48:23.804890  534924 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 10:48:23.807758  534924 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 10:48:23.813483  534924 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 10:48:23.813805  534924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:48:23.835363  534924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:48:23.835508  534924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:23.894145  534924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:48:23.885105394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:23.894255  534924 docker.go:319] overlay module found
	I1101 10:48:23.897250  534924 out.go:99] Using the docker driver based on user configuration
	I1101 10:48:23.897292  534924 start.go:309] selected driver: docker
	I1101 10:48:23.897299  534924 start.go:930] validating driver "docker" against <nil>
	I1101 10:48:23.897407  534924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:48:23.970881  534924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:48:23.961728187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:48:23.971046  534924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:48:23.971326  534924 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 10:48:23.971477  534924 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 10:48:23.974537  534924 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-491444 host does not exist
	  To start a cluster, run: "minikube start -p download-only-491444"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-491444
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 10:48:29.322639  534720 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-212672 --alsologtostderr --binary-mirror http://127.0.0.1:46695 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-212672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-212672
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-780397
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-780397: exit status 85 (73.122719ms)

                                                
                                                
-- stdout --
	* Profile "addons-780397" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-780397"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-780397
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-780397: exit status 85 (81.536818ms)

                                                
                                                
-- stdout --
	* Profile "addons-780397" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-780397"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (166.74s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-780397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-780397 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m46.741192723s)
--- PASS: TestAddons/Setup (166.74s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-780397 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-780397 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.77s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-780397 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-780397 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b044f325-46a3-4863-9631-06680893c991] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b044f325-46a3-4863-9631-06680893c991] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.002727819s
addons_test.go:694: (dbg) Run:  kubectl --context addons-780397 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-780397 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-780397 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-780397 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.77s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-780397
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-780397: (12.168407529s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-780397
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-780397
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-780397
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

                                                
                                    
TestCertOptions (39.97s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-505831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.14344802s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-505831 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-505831 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-505831 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-505831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-505831
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-505831: (2.078030551s)
--- PASS: TestCertOptions (39.97s)

                                                
                                    
TestCertExpiration (243.31s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-534694 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.811405852s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-534694 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.317140223s)
helpers_test.go:175: Cleaning up "cert-expiration-534694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-534694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-534694: (3.185214388s)
--- PASS: TestCertExpiration (243.31s)

                                                
                                    
TestForceSystemdFlag (39.85s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-643844 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-643844 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.686251604s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-643844 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-643844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-643844
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-643844: (2.748934593s)
--- PASS: TestForceSystemdFlag (39.85s)

                                                
                                    
TestForceSystemdEnv (42.85s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-857548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-857548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.974164824s)
helpers_test.go:175: Cleaning up "force-systemd-env-857548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-857548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-857548: (2.875954003s)
--- PASS: TestForceSystemdEnv (42.85s)

                                                
                                    
TestErrorSpam/setup (32.8s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-232738 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-232738 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-232738 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-232738 --driver=docker  --container-runtime=crio: (32.795212374s)
--- PASS: TestErrorSpam/setup (32.80s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (5.34s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause: exit status 80 (1.63566614s)

                                                
                                                
-- stdout --
	* Pausing node nospam-232738 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:55:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause: exit status 80 (1.881444104s)

                                                
                                                
-- stdout --
	* Pausing node nospam-232738 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:55:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause: exit status 80 (1.824957742s)

                                                
                                                
-- stdout --
	* Pausing node nospam-232738 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:55:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.34s)
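
All three pause attempts fail inside the guest in the same way: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so minikube aborts each attempt with GUEST_PAUSE; the unpause runs below hit the identical error. A possible way to confirm the missing runc state directory and compare it with CRI-O's own view, assuming the nospam-232738 profile is still running (these commands are not part of the test, and the path comes only from the error text above):

  out/minikube-linux-arm64 -p nospam-232738 ssh "sudo ls -ld /run/runc"
  out/minikube-linux-arm64 -p nospam-232738 ssh "sudo crictl ps"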

                                                
                                    
x
+
TestErrorSpam/unpause (5.95s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause: exit status 80 (2.025714427s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-232738 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:55:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause: exit status 80 (2.190179642s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-232738 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:55:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause: exit status 80 (1.736657216s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-232738 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:55:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.95s)

                                                
                                    
x
+
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 stop: (1.31175815s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232738 --log_dir /tmp/nospam-232738 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21830-532863/.minikube/files/etc/test/nested/copy/534720/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (82.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-203469 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1101 10:56:17.528039  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:17.534402  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:17.545749  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:17.567108  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:17.608496  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:17.689917  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:17.851247  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:18.172931  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:18.814429  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:20.096006  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:22.658985  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:27.781208  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:38.022653  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:58.504494  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-203469 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.490096535s)
--- PASS: TestFunctional/serial/StartWithProxy (82.49s)
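
The repeated E1101 cert_rotation lines appear to come from a background client-certificate reloader that still references the addons-780397 profile deleted earlier in the run; the missing client.crt is expected at this point and the functional-203469 start itself completes normally in about 82s. One way to confirm that only the expected profiles remain (a verification step, not part of the test):

  out/minikube-linux-arm64 profile list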

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.49s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 10:57:03.064426  534720 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-203469 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-203469 --alsologtostderr -v=8: (27.490926351s)
functional_test.go:678: soft start took 27.491443634s for "functional-203469" cluster.
I1101 10:57:30.555626  534720 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.49s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-203469 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 cache add registry.k8s.io/pause:3.1: (1.153211673s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 cache add registry.k8s.io/pause:3.3: (1.179030562s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 cache add registry.k8s.io/pause:latest: (1.110181679s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-203469 /tmp/TestFunctionalserialCacheCmdcacheadd_local31993878/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cache add minikube-local-cache-test:functional-203469
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cache delete minikube-local-cache-test:functional-203469
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-203469
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)
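
The add_local flow builds a throwaway image against the host Docker daemon, copies it into the node with `cache add`, then deletes both the cache entry and the host image. A minimal sketch of the same sequence for an arbitrary locally built tag (the image name here is an example, not one used by the test):

  docker build -t my-local-test:dev .
  out/minikube-linux-arm64 -p functional-203469 cache add my-local-test:dev
  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl images | grep my-local-test
  out/minikube-linux-arm64 -p functional-203469 cache delete my-local-test:dev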

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.055088ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
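
The cache_reload steps show the intended round trip: remove a cached image from the node's container storage, confirm with `crictl inspecti` that it is gone (the expected exit status 1 above), then let `cache reload` push the images in the local minikube cache back into the node. The same check can be repeated by hand with the commands the test already uses:

  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-203469 cache reload
  out/minikube-linux-arm64 -p functional-203469 ssh sudo crictl inspecti registry.k8s.io/pause:latest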

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 kubectl -- --context functional-203469 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-203469 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-203469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 10:57:39.465951  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-203469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.665852825s)
functional_test.go:776: restart took 37.665962587s for "functional-203469" cluster.
I1101 10:58:15.585545  534720 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.67s)
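
--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is passed through to the kube-apiserver on restart, and the same value shows up under ExtraOptions in the profile dump printed by DryRun further down. A hedged way to verify the flag actually landed, assuming the apiserver pod follows the usual kube-apiserver-<node-name> naming (not confirmed by this report):

  kubectl --context functional-203469 -n kube-system get pod kube-apiserver-functional-203469 \
    -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep admission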

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-203469 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 logs: (1.511945092s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 logs --file /tmp/TestFunctionalserialLogsFileCmd2571616033/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 logs --file /tmp/TestFunctionalserialLogsFileCmd2571616033/001/logs.txt: (1.49035843s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-203469 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-203469
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-203469: exit status 115 (390.107975ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32168 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-203469 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.58s)
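
The SVC_UNREACHABLE exit here is expected: testdata/invalidsvc.yaml creates a Service with no running pod behind it, so minikube prints the NodePort URL but refuses to open it. While the manifest is applied, the empty backing set can be seen directly (a verification step, not part of the test):

  kubectl --context functional-203469 get endpoints invalid-svc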

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 config get cpus: exit status 14 (85.43171ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 config get cpus: exit status 14 (79.116604ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-203469 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-203469 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 561072: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.63s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-203469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-203469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.318188ms)

                                                
                                                
-- stdout --
	* [functional-203469] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:08:52.424347  560608 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:08:52.424485  560608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:08:52.424554  560608 out.go:374] Setting ErrFile to fd 2...
	I1101 11:08:52.424567  560608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:08:52.424974  560608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:08:52.425445  560608 out.go:368] Setting JSON to false
	I1101 11:08:52.426518  560608 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10282,"bootTime":1761985051,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:08:52.426588  560608 start.go:143] virtualization:  
	I1101 11:08:52.429761  560608 out.go:179] * [functional-203469] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:08:52.433594  560608 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:08:52.433668  560608 notify.go:221] Checking for updates...
	I1101 11:08:52.439780  560608 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:08:52.442554  560608 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:08:52.445549  560608 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:08:52.448292  560608 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:08:52.451313  560608 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:08:52.455030  560608 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:08:52.455656  560608 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:08:52.480147  560608 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:08:52.480247  560608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:08:52.547643  560608 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:08:52.538414021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:08:52.547765  560608 docker.go:319] overlay module found
	I1101 11:08:52.550957  560608 out.go:179] * Using the docker driver based on existing profile
	I1101 11:08:52.553820  560608 start.go:309] selected driver: docker
	I1101 11:08:52.553837  560608 start.go:930] validating driver "docker" against &{Name:functional-203469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:08:52.553931  560608 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:08:52.557307  560608 out.go:203] 
	W1101 11:08:52.560094  560608 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 11:08:52.562916  560608 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-203469 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
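
The first dry run is meant to fail: 250MB is below the 1800MB usable minimum reported in the error, hence RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23; the second dry run, without the memory override, validates the existing profile successfully. With a sufficient value the same invocation passes validation without starting anything, for example (3072 is an arbitrary choice above the floor):

  out/minikube-linux-arm64 start -p functional-203469 --dry-run --memory 3072 --alsologtostderr --driver=docker --container-runtime=crio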

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-203469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-203469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.964877ms)

                                                
                                                
-- stdout --
	* [functional-203469] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:08:52.226687  560561 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:08:52.226812  560561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:08:52.226821  560561 out.go:374] Setting ErrFile to fd 2...
	I1101 11:08:52.226826  560561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:08:52.227208  560561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:08:52.227578  560561 out.go:368] Setting JSON to false
	I1101 11:08:52.228436  560561 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10282,"bootTime":1761985051,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:08:52.228507  560561 start.go:143] virtualization:  
	I1101 11:08:52.232320  560561 out.go:179] * [functional-203469] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1101 11:08:52.235518  560561 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:08:52.235553  560561 notify.go:221] Checking for updates...
	I1101 11:08:52.241378  560561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:08:52.244373  560561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:08:52.247923  560561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:08:52.251691  560561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:08:52.254756  560561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:08:52.258306  560561 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:08:52.258893  560561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:08:52.293097  560561 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:08:52.293211  560561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:08:52.353022  560561 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:08:52.342905739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:08:52.353133  560561 docker.go:319] overlay module found
	I1101 11:08:52.356272  560561 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 11:08:52.359236  560561 start.go:309] selected driver: docker
	I1101 11:08:52.359258  560561 start.go:930] validating driver "docker" against &{Name:functional-203469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-203469 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:08:52.359372  560561 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:08:52.362890  560561 out.go:203] 
	W1101 11:08:52.365653  560561 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 11:08:52.368553  560561 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
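
The French output above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun, rendered through minikube's translations; the language is picked up from the caller's locale environment. A hedged reproduction, assuming LC_ALL is honoured by the translation lookup on this host:

  LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-203469 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio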

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
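
The -f flag on `minikube status` takes a Go template over the status fields, which is how the host:.../kublet:... line above is built (the "kublet" spelling is the test's own label, not a field name). A smaller example using the same fields shown in the log:

  out/minikube-linux-arm64 -p functional-203469 status -f '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'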

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e6b24502-fa7b-4cbb-a904-fd3e804802a9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003306549s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-203469 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-203469 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-203469 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-203469 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [01edc5eb-5856-4119-b969-64854cf24412] Pending
helpers_test.go:352: "sp-pod" [01edc5eb-5856-4119-b969-64854cf24412] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [01edc5eb-5856-4119-b969-64854cf24412] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003242621s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-203469 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-203469 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-203469 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1ebc8433-40e1-4407-8619-7f6fa33f79cf] Pending
helpers_test.go:352: "sp-pod" [1ebc8433-40e1-4407-8619-7f6fa33f79cf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003598357s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-203469 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.93s)
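
The PVC round-trip above can be reproduced by hand with the same testdata manifests; the file written into the mounted volume is expected to survive deleting and recreating the pod:

    kubectl --context functional-203469 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-203469 apply -f testdata/storage-provisioner/pod.yaml
    # wait for sp-pod to be Running before each exec
    kubectl --context functional-203469 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-203469 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-203469 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-203469 exec sp-pod -- ls /tmp/mount    # foo should still be listed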

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh -n functional-203469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cp functional-203469:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1747816278/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh -n functional-203469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh -n functional-203469 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)
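
The three copy directions exercised above, condensed into a manual sketch (the local destination below is an illustrative path; the harness writes into its own temp dir):

    out/minikube-linux-arm64 -p functional-203469 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-203469 cp functional-203469:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-arm64 -p functional-203469 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-arm64 -p functional-203469 ssh -n functional-203469 "sudo cat /home/docker/cp-test.txt"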

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/534720/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/test/nested/copy/534720/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/534720.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/ssl/certs/534720.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/534720.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /usr/share/ca-certificates/534720.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5347202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/ssl/certs/5347202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5347202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /usr/share/ca-certificates/5347202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.34s)
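
The cert-sync checks assert that the synced certificates are visible under both /etc/ssl/certs and /usr/share/ca-certificates, plus under their hash-named entries (534720 in the paths matches the test run's process ID seen in the I-prefixed log lines). A manual spot check for the first certificate:

    out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/ssl/certs/534720.pem"
    out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /usr/share/ca-certificates/534720.pem"
    out/minikube-linux-arm64 -p functional-203469 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named entry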

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-203469 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
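
The label query above uses a kubectl go-template over the first node's metadata; an equivalent manual invocation (same template as in the log):

    kubectl --context functional-203469 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'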

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active docker": exit status 1 (350.531595ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active containerd": exit status 1 (398.306817ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
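
Both probes above print "inactive" and exit with status 3, which is what the test expects on a crio cluster: only the selected runtime should be active. A manual check (the crio line is an extra check not run by this test):

    out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
    out/minikube-linux-arm64 -p functional-203469 ssh "sudo systemctl is-active crio"         # expected active, exit 0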

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 557317: os: process already finished
helpers_test.go:519: unable to terminate pid 557081: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-203469 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [54131a69-29b4-4c2d-8ccf-0672f8324189] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [54131a69-29b4-4c2d-8ccf-0672f8324189] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007552939s
I1101 10:58:34.705992  534720 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-203469 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.185.249 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
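
Taken together, the tunnel sub-steps above amount to the following manual flow (the ClusterIP 10.96.185.249 is specific to this run; the curl probe is a hypothetical manual check, the harness does its own HTTP probe):

    out/minikube-linux-arm64 -p functional-203469 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    kubectl --context functional-203469 apply -f testdata/testsvc.yaml
    kubectl --context functional-203469 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -sI http://10.96.185.249/    # reachable only while the tunnel process is running
    kill $TUNNEL_PID                  # tearing down the tunnel removes the route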

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "379.203964ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "64.117343ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "376.765888ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "66.545262ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
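
The timing gap between the plain and light listings above (roughly 380ms vs 65ms in both sub-tests) is expected, since the light variants skip validating each profile's live status. The four variants exercised:

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 profile list -l
    out/minikube-linux-arm64 profile list -o json
    out/minikube-linux-arm64 profile list -o json --light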

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdany-port1106610005/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761995320024040526" to /tmp/TestFunctionalparallelMountCmdany-port1106610005/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761995320024040526" to /tmp/TestFunctionalparallelMountCmdany-port1106610005/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761995320024040526" to /tmp/TestFunctionalparallelMountCmdany-port1106610005/001/test-1761995320024040526
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.639111ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 11:08:40.393043  534720 retry.go:31] will retry after 741.211256ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 11:08 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 11:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 11:08 test-1761995320024040526
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh cat /mount-9p/test-1761995320024040526
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-203469 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5b6bb8ca-09ef-424c-85ef-d732fe157fe5] Pending
helpers_test.go:352: "busybox-mount" [5b6bb8ca-09ef-424c-85ef-d732fe157fe5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5b6bb8ca-09ef-424c-85ef-d732fe157fe5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5b6bb8ca-09ef-424c-85ef-d732fe157fe5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004238713s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-203469 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdany-port1106610005/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.26s)
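
The 9p mount flow above, as a manual sketch (the host path below is illustrative, the harness mounts its own temp dir; as in the log, the first findmnt may need a retry while the mount comes up):

    out/minikube-linux-arm64 mount -p functional-203469 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-203469 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-203469 ssh "sudo umount -f /mount-9p"    # then stop the backgrounded mount process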

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdspecific-port3296438858/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.744962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 11:08:47.615227  534720 retry.go:31] will retry after 524.329362ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdspecific-port3296438858/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh "sudo umount -f /mount-9p": exit status 1 (280.953008ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-203469 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdspecific-port3296438858/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2890215840/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2890215840/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2890215840/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T" /mount1: exit status 1 (587.060449ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 11:08:49.786461  534720 retry.go:31] will retry after 392.237103ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-203469 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2890215840/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2890215840/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-203469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2890215840/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)
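
VerifyCleanup starts three concurrent mounts of the same host directory (/mount1, /mount2, /mount3) and then tears them all down with the single kill flag shown above:

    out/minikube-linux-arm64 mount -p functional-203469 --kill=true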

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 service list -o json: (1.431392899s)
functional_test.go:1504: Took "1.431492994s" to run "out/minikube-linux-arm64 -p functional-203469 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-203469 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-203469 image ls --format short --alsologtostderr:
I1101 11:09:08.077056  563326 out.go:360] Setting OutFile to fd 1 ...
I1101 11:09:08.077178  563326 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.077191  563326 out.go:374] Setting ErrFile to fd 2...
I1101 11:09:08.077196  563326 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.077469  563326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
I1101 11:09:08.078178  563326 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.078343  563326 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.078921  563326 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
I1101 11:09:08.102523  563326 ssh_runner.go:195] Run: systemctl --version
I1101 11:09:08.102590  563326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
I1101 11:09:08.133934  563326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
I1101 11:09:08.245326  563326 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-203469 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ latest             │ 46fabdd7f288c │ 176MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-203469 image ls --format table --alsologtostderr:
I1101 11:09:08.363080  563405 out.go:360] Setting OutFile to fd 1 ...
I1101 11:09:08.363312  563405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.363338  563405 out.go:374] Setting ErrFile to fd 2...
I1101 11:09:08.363354  563405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.363646  563405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
I1101 11:09:08.372202  563405 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.372370  563405 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.373000  563405 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
I1101 11:09:08.398994  563405 ssh_runner.go:195] Run: systemctl --version
I1101 11:09:08.399046  563405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
I1101 11:09:08.425005  563405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
I1101 11:09:08.536492  563405 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-203469 image ls --format json --alsologtostderr:
[{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["doc
ker.io/library/nginx:latest"],"size":"176006680"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854
ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTa
gs":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6
db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5
196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-203469 image ls --format json --alsologtostderr:
I1101 11:09:08.370613  563400 out.go:360] Setting OutFile to fd 1 ...
I1101 11:09:08.371186  563400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.371219  563400 out.go:374] Setting ErrFile to fd 2...
I1101 11:09:08.371239  563400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.371541  563400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
I1101 11:09:08.372320  563400 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.372502  563400 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.373003  563400 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
I1101 11:09:08.404958  563400 ssh_runner.go:195] Run: systemctl --version
I1101 11:09:08.405015  563400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
I1101 11:09:08.443344  563400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
I1101 11:09:08.564718  563400 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-203469 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "176006680"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-203469 image ls --format yaml --alsologtostderr:
I1101 11:09:08.070975  563327 out.go:360] Setting OutFile to fd 1 ...
I1101 11:09:08.071536  563327 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.071666  563327 out.go:374] Setting ErrFile to fd 2...
I1101 11:09:08.071702  563327 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.072021  563327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
I1101 11:09:08.073012  563327 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.074921  563327 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.075499  563327 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
I1101 11:09:08.099451  563327 ssh_runner.go:195] Run: systemctl --version
I1101 11:09:08.099508  563327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
I1101 11:09:08.121968  563327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
I1101 11:09:08.232630  563327 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
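
All four list formats (short, table, json, yaml) are rendered client-side from the same node-side inventory; each stderr trace above ends with the underlying `sudo crictl images --output json` call. The variants exercised:

    out/minikube-linux-arm64 -p functional-203469 image ls --format short
    out/minikube-linux-arm64 -p functional-203469 image ls --format table
    out/minikube-linux-arm64 -p functional-203469 image ls --format json
    out/minikube-linux-arm64 -p functional-203469 image ls --format yaml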

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-203469 ssh pgrep buildkitd: exit status 1 (315.170527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image build -t localhost/my-image:functional-203469 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-203469 image build -t localhost/my-image:functional-203469 testdata/build --alsologtostderr: (3.394625995s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-203469 image build -t localhost/my-image:functional-203469 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ff13bd5b117
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-203469
--> b4350c7e3f9
Successfully tagged localhost/my-image:functional-203469
b4350c7e3f9111805b65d9c69716da0c183111130ecf9cd8da97e2f41d243d98
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-203469 image build -t localhost/my-image:functional-203469 testdata/build --alsologtostderr:
I1101 11:09:08.947633  563536 out.go:360] Setting OutFile to fd 1 ...
I1101 11:09:08.948436  563536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.948475  563536 out.go:374] Setting ErrFile to fd 2...
I1101 11:09:08.948498  563536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 11:09:08.948800  563536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
I1101 11:09:08.949655  563536 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.950492  563536 config.go:182] Loaded profile config "functional-203469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 11:09:08.951072  563536 cli_runner.go:164] Run: docker container inspect functional-203469 --format={{.State.Status}}
I1101 11:09:08.969826  563536 ssh_runner.go:195] Run: systemctl --version
I1101 11:09:08.969880  563536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-203469
I1101 11:09:08.988412  563536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/functional-203469/id_rsa Username:docker}
I1101 11:09:09.096376  563536 build_images.go:162] Building image from path: /tmp/build.1264101951.tar
I1101 11:09:09.096448  563536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 11:09:09.104722  563536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1264101951.tar
I1101 11:09:09.108492  563536 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1264101951.tar: stat -c "%s %y" /var/lib/minikube/build/build.1264101951.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1264101951.tar': No such file or directory
I1101 11:09:09.108524  563536 ssh_runner.go:362] scp /tmp/build.1264101951.tar --> /var/lib/minikube/build/build.1264101951.tar (3072 bytes)
I1101 11:09:09.127478  563536 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1264101951
I1101 11:09:09.136546  563536 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1264101951 -xf /var/lib/minikube/build/build.1264101951.tar
I1101 11:09:09.145342  563536 crio.go:315] Building image: /var/lib/minikube/build/build.1264101951
I1101 11:09:09.145492  563536 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-203469 /var/lib/minikube/build/build.1264101951 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1101 11:09:12.266123  563536 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-203469 /var/lib/minikube/build/build.1264101951 --cgroup-manager=cgroupfs: (3.120603648s)
I1101 11:09:12.266195  563536 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1264101951
I1101 11:09:12.274165  563536 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1264101951.tar
I1101 11:09:12.282891  563536 build_images.go:218] Built localhost/my-image:functional-203469 from /tmp/build.1264101951.tar
I1101 11:09:12.282923  563536 build_images.go:134] succeeded building to: functional-203469
I1101 11:09:12.282929  563536 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)
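Note on the build above: the same flow can be reproduced outside the test harness by shelling out to the binary and arguments shown in the log (functional_test.go:330 and :466). The Go sketch below is a hypothetical stand-alone helper, not code from the suite; the binary path, profile name and tag are copied from the log lines above.

// buildcheck.go - illustrative reproduction of the logged "image build" flow.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "functional-203469"
	tag := "localhost/my-image:" + profile

	// Same invocation as the functional_test.go:330 line in the log.
	build := exec.Command(bin, "-p", profile, "image", "build", "-t", tag,
		"testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}

	// Same verification step as functional_test.go:466: list images so the
	// new tag can be checked for.
	ls := exec.Command(bin, "-p", profile, "image", "ls")
	out, err := ls.CombinedOutput()
	fmt.Printf("image ls (err=%v):\n%s", err, out)
}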

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-203469
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image rm kicbase/echo-server:functional-203469 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-203469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-203469
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-203469
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-203469
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (197.94s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 11:11:17.525676  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m17.020489154s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (197.94s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.39s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 kubectl -- rollout status deployment/busybox: (3.545717903s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-7m8cp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-lm6r8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-x679v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-7m8cp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-lm6r8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-x679v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-7m8cp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-lm6r8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-x679v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.55s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-7m8cp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-7m8cp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-lm6r8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-lm6r8 -- sh -c "ping -c 1 192.168.49.1"
E1101 11:12:40.591767  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-x679v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 kubectl -- exec busybox-7b57f96db7-x679v -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (62.37s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node add --alsologtostderr -v 5
E1101 11:13:25.235212  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.241855  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.253361  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.274896  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.316382  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.397812  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.559283  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:25.880833  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:26.522196  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:27.803850  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:30.365746  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:13:35.487457  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 node add --alsologtostderr -v 5: (1m1.292717849s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: (1.079904527s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.37s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-472819 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.082142333s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.49s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --output json --alsologtostderr -v 5
E1101 11:13:45.729095  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 status --output json --alsologtostderr -v 5: (1.201139014s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp testdata/cp-test.txt ha-472819:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3224874569/001/cp-test_ha-472819.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819_ha-472819-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test_ha-472819_ha-472819-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819:/home/docker/cp-test.txt ha-472819-m03:/home/docker/cp-test_ha-472819_ha-472819-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test_ha-472819_ha-472819-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819:/home/docker/cp-test.txt ha-472819-m04:/home/docker/cp-test_ha-472819_ha-472819-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test_ha-472819_ha-472819-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp testdata/cp-test.txt ha-472819-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3224874569/001/cp-test_ha-472819-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m02:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m02_ha-472819.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test_ha-472819-m02_ha-472819.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m02:/home/docker/cp-test.txt ha-472819-m03:/home/docker/cp-test_ha-472819-m02_ha-472819-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test_ha-472819-m02_ha-472819-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m02:/home/docker/cp-test.txt ha-472819-m04:/home/docker/cp-test_ha-472819-m02_ha-472819-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test_ha-472819-m02_ha-472819-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp testdata/cp-test.txt ha-472819-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3224874569/001/cp-test_ha-472819-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m03_ha-472819.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819-m03_ha-472819-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m03:/home/docker/cp-test.txt ha-472819-m04:/home/docker/cp-test_ha-472819-m03_ha-472819-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test_ha-472819-m03_ha-472819-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp testdata/cp-test.txt ha-472819-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3224874569/001/cp-test_ha-472819-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819:/home/docker/cp-test_ha-472819-m04_ha-472819.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819 "sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819-m02:/home/docker/cp-test_ha-472819-m04_ha-472819-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m02 "sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 cp ha-472819-m04:/home/docker/cp-test.txt ha-472819-m03:/home/docker/cp-test_ha-472819-m04_ha-472819-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 ssh -n ha-472819-m03 "sudo cat /home/docker/cp-test_ha-472819-m04_ha-472819-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.49s)
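Note on the CopyFile sequence above: the long run of helper lines follows one pattern per node - seed /home/docker/cp-test.txt on a source node, then copy it to every other node and cat it over ssh to verify. A minimal Go sketch of that pairing logic (node names taken from the log; for brevity it only prints the equivalent minikube commands instead of running them):

package main

import "fmt"

func main() {
	// Node names as they appear in the CopyFile log above.
	nodes := []string{"ha-472819", "ha-472819-m02", "ha-472819-m03", "ha-472819-m04"}

	for _, src := range nodes {
		// Seed the source node, then read the file back over ssh.
		fmt.Printf("minikube -p ha-472819 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("minikube -p ha-472819 ssh -n %s \"sudo cat /home/docker/cp-test.txt\"\n", src)

		// Copy from the source node to every other node and verify there.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			fmt.Printf("minikube -p ha-472819 cp %s:/home/docker/cp-test.txt %s:%s\n", src, dst, dstPath)
			fmt.Printf("minikube -p ha-472819 ssh -n %s \"sudo cat %s\"\n", dst, dstPath)
		}
	}
}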

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node stop m02 --alsologtostderr -v 5
E1101 11:14:06.210917  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 node stop m02 --alsologtostderr -v 5: (12.078602651s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 7 (801.876162ms)

-- stdout --
	ha-472819
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-472819-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-472819-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 11:14:17.448066  578291 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:14:17.448243  578291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:14:17.448274  578291 out.go:374] Setting ErrFile to fd 2...
	I1101 11:14:17.448294  578291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:14:17.448608  578291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:14:17.448837  578291 out.go:368] Setting JSON to false
	I1101 11:14:17.448898  578291 mustload.go:66] Loading cluster: ha-472819
	I1101 11:14:17.448922  578291 notify.go:221] Checking for updates...
	I1101 11:14:17.449390  578291 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:14:17.449431  578291 status.go:174] checking status of ha-472819 ...
	I1101 11:14:17.450054  578291 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:14:17.471679  578291 status.go:371] ha-472819 host status = "Running" (err=<nil>)
	I1101 11:14:17.471707  578291 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:14:17.471996  578291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819
	I1101 11:14:17.498566  578291 host.go:66] Checking if "ha-472819" exists ...
	I1101 11:14:17.498873  578291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:14:17.498917  578291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819
	I1101 11:14:17.520788  578291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819/id_rsa Username:docker}
	I1101 11:14:17.627272  578291 ssh_runner.go:195] Run: systemctl --version
	I1101 11:14:17.633609  578291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:14:17.647099  578291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:14:17.724576  578291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-01 11:14:17.715198057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:14:17.725128  578291 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:14:17.725169  578291 api_server.go:166] Checking apiserver status ...
	I1101 11:14:17.725219  578291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:14:17.738025  578291 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1101 11:14:17.747426  578291 api_server.go:182] apiserver freezer: "10:freezer:/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8"
	I1101 11:14:17.747504  578291 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/66de5fe90fef65b9b7fdcec88f1cb31289b5fd1d95cc881b8beb6ec5f94ceb5c/crio/crio-91af80c077c55f22c55a82cba007fef6ec8fa3f92d010ceb23da188210f136c8/freezer.state
	I1101 11:14:17.755622  578291 api_server.go:204] freezer state: "THAWED"
	I1101 11:14:17.755656  578291 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:14:17.765489  578291 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:14:17.765522  578291 status.go:463] ha-472819 apiserver status = Running (err=<nil>)
	I1101 11:14:17.765535  578291 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:14:17.765556  578291 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:14:17.765902  578291 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:14:17.786571  578291 status.go:371] ha-472819-m02 host status = "Stopped" (err=<nil>)
	I1101 11:14:17.786597  578291 status.go:384] host is not running, skipping remaining checks
	I1101 11:14:17.786604  578291 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:14:17.786625  578291 status.go:174] checking status of ha-472819-m03 ...
	I1101 11:14:17.786940  578291 cli_runner.go:164] Run: docker container inspect ha-472819-m03 --format={{.State.Status}}
	I1101 11:14:17.807385  578291 status.go:371] ha-472819-m03 host status = "Running" (err=<nil>)
	I1101 11:14:17.807415  578291 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:14:17.807749  578291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m03
	I1101 11:14:17.825859  578291 host.go:66] Checking if "ha-472819-m03" exists ...
	I1101 11:14:17.826267  578291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:14:17.826316  578291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m03
	I1101 11:14:17.844613  578291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m03/id_rsa Username:docker}
	I1101 11:14:17.951556  578291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:14:17.965040  578291 kubeconfig.go:125] found "ha-472819" server: "https://192.168.49.254:8443"
	I1101 11:14:17.965070  578291 api_server.go:166] Checking apiserver status ...
	I1101 11:14:17.965113  578291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:14:17.977525  578291 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1209/cgroup
	I1101 11:14:17.986548  578291 api_server.go:182] apiserver freezer: "10:freezer:/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d"
	I1101 11:14:17.986623  578291 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06a2c0e4705765059670476a3146db27cb1469d9b4f5d96e154163daa8d67a1b/crio/crio-5ee73480d8010298d57c0d7ed1d838c132b3844d5fd13d3bc1014a24898c680d/freezer.state
	I1101 11:14:17.993915  578291 api_server.go:204] freezer state: "THAWED"
	I1101 11:14:17.993945  578291 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:14:18.002225  578291 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:14:18.002271  578291 status.go:463] ha-472819-m03 apiserver status = Running (err=<nil>)
	I1101 11:14:18.002307  578291 status.go:176] ha-472819-m03 status: &{Name:ha-472819-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:14:18.002358  578291 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:14:18.002747  578291 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:14:18.027516  578291 status.go:371] ha-472819-m04 host status = "Running" (err=<nil>)
	I1101 11:14:18.027554  578291 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:14:18.027852  578291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-472819-m04
	I1101 11:14:18.048120  578291 host.go:66] Checking if "ha-472819-m04" exists ...
	I1101 11:14:18.048493  578291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:14:18.048549  578291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-472819-m04
	I1101 11:14:18.069379  578291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/ha-472819-m04/id_rsa Username:docker}
	I1101 11:14:18.175158  578291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:14:18.189219  578291 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
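Note on the status.go:176 lines in the stderr above: each one prints a Go struct literal with %+v. A stand-in type with the same field names (taken from the logged output; not minikube's actual exported type) reproduces the format:

package main

import "fmt"

// Status mirrors the fields visible in the logged struct literals; the type
// itself is defined here only for illustration.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := Status{Name: "ha-472819-m02", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: false}
	// Prints in the same "&{Name:... Host:...}" form seen in the log.
	fmt.Printf("%+v\n", &s)
}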

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.26s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 stop --alsologtostderr -v 5
E1101 11:23:25.240960  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 stop --alsologtostderr -v 5: (37.403194999s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 start --wait true --alsologtostderr -v 5: (1m45.675585408s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.26s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.8s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 node delete m03 --alsologtostderr -v 5: (10.818748481s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.80s)
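Note on the go-template check above (ha_test.go:521): it pulls the Ready condition status out of every node returned by kubectl. The sketch below runs the same template string with Go's text/template against a small hand-written stand-in for the node list (the sample data is illustrative), to show what the assertion actually sees:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template string as the ha_test.go:521 check above.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Minimal stand-in for the `kubectl get nodes` JSON; values are made up.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints one " True" line per Ready node, which is what the test checks.
	_ = t.Execute(os.Stdout, nodes)
}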

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.13s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 stop --alsologtostderr -v 5: (36.017856024s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: exit status 7 (116.162702ms)

-- stdout --
	ha-472819
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-472819-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-472819-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 11:26:00.818756  592582 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:26:00.818968  592582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:26:00.818995  592582 out.go:374] Setting ErrFile to fd 2...
	I1101 11:26:00.819013  592582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:26:00.819300  592582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:26:00.819539  592582 out.go:368] Setting JSON to false
	I1101 11:26:00.819602  592582 mustload.go:66] Loading cluster: ha-472819
	I1101 11:26:00.819664  592582 notify.go:221] Checking for updates...
	I1101 11:26:00.820625  592582 config.go:182] Loaded profile config "ha-472819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:26:00.820665  592582 status.go:174] checking status of ha-472819 ...
	I1101 11:26:00.821269  592582 cli_runner.go:164] Run: docker container inspect ha-472819 --format={{.State.Status}}
	I1101 11:26:00.839643  592582 status.go:371] ha-472819 host status = "Stopped" (err=<nil>)
	I1101 11:26:00.839667  592582 status.go:384] host is not running, skipping remaining checks
	I1101 11:26:00.839674  592582 status.go:176] ha-472819 status: &{Name:ha-472819 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:26:00.839707  592582 status.go:174] checking status of ha-472819-m02 ...
	I1101 11:26:00.840014  592582 cli_runner.go:164] Run: docker container inspect ha-472819-m02 --format={{.State.Status}}
	I1101 11:26:00.863262  592582 status.go:371] ha-472819-m02 host status = "Stopped" (err=<nil>)
	I1101 11:26:00.863288  592582 status.go:384] host is not running, skipping remaining checks
	I1101 11:26:00.863296  592582 status.go:176] ha-472819-m02 status: &{Name:ha-472819-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:26:00.863315  592582 status.go:174] checking status of ha-472819-m04 ...
	I1101 11:26:00.863604  592582 cli_runner.go:164] Run: docker container inspect ha-472819-m04 --format={{.State.Status}}
	I1101 11:26:00.885861  592582 status.go:371] ha-472819-m04 host status = "Stopped" (err=<nil>)
	I1101 11:26:00.885881  592582 status.go:384] host is not running, skipping remaining checks
	I1101 11:26:00.885895  592582 status.go:176] ha-472819-m04 status: &{Name:ha-472819-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.13s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (89.41s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 11:26:17.525802  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.34803517s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.41s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.92s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.56s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 node add --control-plane --alsologtostderr -v 5
E1101 11:28:25.235241  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 node add --control-plane --alsologtostderr -v 5: (1m21.496101303s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-472819 status --alsologtostderr -v 5: (1.065514422s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.56s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.120527993s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

                                                
                                    
TestJSONOutput/start/Command (78.22s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-276323 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1101 11:29:20.593379  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:29:48.298289  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-276323 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.207068552s)
--- PASS: TestJSONOutput/start/Command (78.22s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-276323 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-276323 --output=json --user=testUser: (5.830627516s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-505179 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-505179 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (103.585288ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a5fe774e-a86c-41fb-8541-f1f1cb8ef087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-505179] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc526640-ef7a-4ded-9d41-c7d05d99ba6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"763fef18-b7b1-458a-9c9b-14b59df6c466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27109ea5-c16a-4d78-8d36-4b3ee654d874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig"}}
	{"specversion":"1.0","id":"dd4bf95b-1a2b-4ebd-ba83-19c4e07cc1ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube"}}
	{"specversion":"1.0","id":"2c4cb3e0-ecab-49a5-bbf6-edb3b72f06e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"83c69017-6864-457f-8ee1-771324da7dfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"13aec960-a3d3-40ee-ae8b-61b49723059b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-505179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-505179
--- PASS: TestErrorJSONOutput (0.25s)
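
Each line emitted by --output=json above is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data). As a rough, non-authoritative sketch, the Go program below decodes such lines from stdin and prints the event type and message; the field set is taken only from the log lines above, not from any documented minikube schema, and the struct name is illustrative.

package main

// Decode the one-JSON-object-per-line events shown above, e.g.
//   out/minikube-linux-arm64 start --output=json ... | go run decode_events.go
// Only the fields visible in the log are modelled here.

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // ignore anything that is not an event line
		}
		// io.k8s.sigs.minikube.error events additionally carry "exitcode" and
		// "name", as in the DRV_UNSUPPORTED_OS line above.
		fmt.Printf("%-35s %s\n", e.Type, e.Data["message"])
	}
}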

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.93s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-386113 --network=
E1101 11:31:17.525851  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-386113 --network=: (39.739108374s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-386113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-386113
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-386113: (2.164935575s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.93s)
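
For reference, the create_custom_network flow above reduces to three commands: start a profile with an empty --network= value (minikube then creates a docker network named after the profile), list docker networks to confirm it exists, and delete the profile. A minimal Go sketch of the same sequence, assuming the binary path and profile name exactly as they appear in the log; the run helper is mine, not part of the test suite.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and fails loudly, mirroring how the test treats errors.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "docker-network-386113" // profile name from the log; any unique name works
	run("out/minikube-linux-arm64", "start", "-p", profile, "--network=")
	fmt.Print(run("docker", "network", "ls", "--format", "{{.Name}}"))
	run("out/minikube-linux-arm64", "delete", "-p", profile)
}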

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.62s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-554156 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-554156 --network=bridge: (35.524584967s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-554156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-554156
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-554156: (2.07361192s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.62s)

                                                
                                    
TestKicExistingNetwork (34.7s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1101 11:31:58.641217  534720 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 11:31:58.656585  534720 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 11:31:58.656668  534720 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 11:31:58.656687  534720 cli_runner.go:164] Run: docker network inspect existing-network
W1101 11:31:58.674338  534720 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 11:31:58.674373  534720 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1101 11:31:58.674389  534720 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1101 11:31:58.674502  534720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 11:31:58.696902  534720 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fad877b9a6cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:42:a4:0d:8c:c4:a0} reservation:<nil>}
I1101 11:31:58.697241  534720 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019331c0}
I1101 11:31:58.697268  534720 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 11:31:58.697319  534720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 11:31:58.756337  534720 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-360967 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-360967 --network=existing-network: (32.424750933s)
helpers_test.go:175: Cleaning up "existing-network-360967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-360967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-360967: (2.126965138s)
I1101 11:32:33.325232  534720 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.70s)
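
The interesting part of this test is in the I-lines above: the harness pre-creates a bridge network with an explicit subnet, gateway and MTU, then asks minikube to reuse it via --network=existing-network. A sketch of the same two steps, with the docker network create options copied from the log; the must helper and error handling are illustrative.

package main

import (
	"log"
	"os/exec"
)

// must runs a command and aborts on failure.
func must(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s failed: %v\n%s", name, err, out)
	}
}

func main() {
	// Pre-create the bridge network exactly as in the log (subnet picked because
	// 192.168.49.0/24 was already taken by another profile's network).
	must("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	// minikube then reuses this network instead of allocating its own subnet.
	must("out/minikube-linux-arm64", "start", "-p", "existing-network-360967",
		"--network=existing-network")
}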

                                                
                                    
TestKicCustomSubnet (36.49s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-327596 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-327596 --subnet=192.168.60.0/24: (34.261522125s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-327596 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-327596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-327596
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-327596: (2.200811218s)
--- PASS: TestKicCustomSubnet (36.49s)
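
The subnet check above leans on the fact that minikube names its docker network after the profile, so the requested CIDR can be read back with docker network inspect. A small sketch that re-runs the inspect command from the log and compares the result; profile name and CIDR are the ones used above.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-327596",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		log.Fatalf("unexpected subnet %q", got)
	}
	fmt.Println("subnet matches:", got)
}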

                                                
                                    
TestKicStaticIP (34.36s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-288203 --static-ip=192.168.200.200
E1101 11:33:25.237857  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-288203 --static-ip=192.168.200.200: (31.937326482s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-288203 ip
helpers_test.go:175: Cleaning up "static-ip-288203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-288203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-288203: (2.27459996s)
--- PASS: TestKicStaticIP (34.36s)
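
In the same spirit, the static-IP case can be checked by comparing the output of minikube ip against the requested address. This is a sketch only, with the address and profile name copied from the log.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.200.200"
	mk := "out/minikube-linux-arm64"
	if out, err := exec.Command(mk, "start", "-p", "static-ip-288203",
		"--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}
	out, err := exec.Command(mk, "-p", "static-ip-288203", "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("got node IP %s, want %s", got, want)
	}
}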

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-752716 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-752716 --driver=docker  --container-runtime=crio: (31.434960518s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-755473 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-755473 --driver=docker  --container-runtime=crio: (36.370561155s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-752716
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-755473
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-755473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-755473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-755473: (2.209814909s)
helpers_test.go:175: Cleaning up "first-752716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-752716
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-752716: (2.050652829s)
--- PASS: TestMinikubeProfile (73.52s)
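
profile list -ojson is what the test uses to confirm both profiles exist. The log runs the command but does not show its payload, so the sketch below deliberately decodes it into a generic structure and only reports top-level keys and entry counts rather than assuming any field names.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode without committing to a schema: top-level keys map to raw JSON values.
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		log.Fatal(err)
	}
	for key, raw := range payload {
		var entries []json.RawMessage
		if json.Unmarshal(raw, &entries) == nil {
			fmt.Printf("%s: %d profile(s)\n", key, len(entries))
		} else {
			fmt.Printf("%s: %s\n", key, raw)
		}
	}
}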

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-783712 --memory=3072 --mount-string /tmp/TestMountStartserial3058169636/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-783712 --memory=3072 --mount-string /tmp/TestMountStartserial3058169636/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.415925539s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.42s)
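
The long start command above is mostly 9p mount plumbing. The sketch below rebuilds the same argument list from named pieces so the individual flags are easier to read; the inline notes are brief glosses of my own, not minikube's help text, and the host directory is whatever temp dir the test created.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	hostDir := "/tmp/TestMountStartserial3058169636/001" // temp dir the test created
	args := []string{
		"start", "-p", "mount-start-1-783712", "--memory=3072",
		"--mount-string", hostDir + ":/minikube-host", // host:guest pair to mount
		"--mount-gid", "0", "--mount-uid", "0", // ownership inside the guest
		"--mount-msize", "6543", // 9p payload size
		"--mount-port", "46464", // port the mount is served on
		"--no-kubernetes", // mount-only node, no control plane
		"--driver=docker", "--container-runtime=crio",
	}
	fmt.Println("out/minikube-linux-arm64 " + strings.Join(args, " "))
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s", err, out)
	}
}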

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-783712 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-785615 --memory=3072 --mount-string /tmp/TestMountStartserial3058169636/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-785615 --memory=3072 --mount-string /tmp/TestMountStartserial3058169636/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.734133363s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.73s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-785615 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-783712 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-783712 --alsologtostderr -v=5: (1.726567168s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-785615 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-785615
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-785615: (1.297850399s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.8s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-785615
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-785615: (6.798019761s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-785615 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (140.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-545164 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 11:36:17.525801  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-545164 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.593716104s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.13s)
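
The two-node bring-up is a single start with --nodes=2 followed by a status call that the later multinode steps keep reusing. A minimal sketch, with flags and profile name mirroring the log.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	start := exec.Command(mk, "start", "-p", "multinode-545164",
		"--wait=true", "--memory=3072", "--nodes=2", "--driver=docker", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatal(err)
	}
	// Expect one "Control Plane" entry and one "Worker" entry, as in the status
	// tables printed by the StopNode step further down.
	status := exec.Command(mk, "-p", "multinode-545164", "status")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	_ = status.Run() // status exits non-zero if a node is down; ignored here
}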

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-545164 -- rollout status deployment/busybox: (2.987095488s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-ms2w5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-qsmnw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-ms2w5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-qsmnw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-ms2w5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-qsmnw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-ms2w5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-ms2w5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-qsmnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-545164 -- exec busybox-7b57f96db7-qsmnw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
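
The shell pipeline the test runs inside each pod, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox nslookup output and its third space-separated field, which is where the resolved host address is expected to sit; the pods then ping that address. A Go rendering of the same extraction, assuming the same nslookup layout and reading the nslookup output from stdin.

package main

// Equivalent of: awk 'NR==5' | cut -d' ' -f3
// Whether line 5 / field 3 is where the address lands depends on the busybox
// nslookup output format the test assumes.

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	line := 0
	for sc.Scan() {
		line++
		if line != 5 {
			continue
		}
		// cut -d' ' splits on single spaces (keeping empty fields), so use Split
		// rather than Fields to match it.
		fields := strings.Split(sc.Text(), " ")
		if len(fields) >= 3 {
			fmt.Println(fields[2]) // the host IP the pods then ping
		}
		return
	}
}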

                                                
                                    
TestMultiNode/serial/AddNode (59.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-545164 -v=5 --alsologtostderr
E1101 11:38:25.235359  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-545164 -v=5 --alsologtostderr: (58.791880313s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.50s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-545164 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp testdata/cp-test.txt multinode-545164:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile215742715/001/cp-test_multinode-545164.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164:/home/docker/cp-test.txt multinode-545164-m02:/home/docker/cp-test_multinode-545164_multinode-545164-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m02 "sudo cat /home/docker/cp-test_multinode-545164_multinode-545164-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164:/home/docker/cp-test.txt multinode-545164-m03:/home/docker/cp-test_multinode-545164_multinode-545164-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m03 "sudo cat /home/docker/cp-test_multinode-545164_multinode-545164-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp testdata/cp-test.txt multinode-545164-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile215742715/001/cp-test_multinode-545164-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164-m02:/home/docker/cp-test.txt multinode-545164:/home/docker/cp-test_multinode-545164-m02_multinode-545164.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164 "sudo cat /home/docker/cp-test_multinode-545164-m02_multinode-545164.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164-m02:/home/docker/cp-test.txt multinode-545164-m03:/home/docker/cp-test_multinode-545164-m02_multinode-545164-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m03 "sudo cat /home/docker/cp-test_multinode-545164-m02_multinode-545164-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp testdata/cp-test.txt multinode-545164-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile215742715/001/cp-test_multinode-545164-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164-m03:/home/docker/cp-test.txt multinode-545164:/home/docker/cp-test_multinode-545164-m03_multinode-545164.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164 "sudo cat /home/docker/cp-test_multinode-545164-m03_multinode-545164.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 cp multinode-545164-m03:/home/docker/cp-test.txt multinode-545164-m02:/home/docker/cp-test_multinode-545164-m03_multinode-545164-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 ssh -n multinode-545164-m02 "sudo cat /home/docker/cp-test_multinode-545164-m03_multinode-545164-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.82s)
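
The CopyFile step is long but regular: minikube cp a test file onto each node, then ssh -n <node> sudo cat it back, both on the same node and across nodes. A compact sketch of one cp/cat round trip per node; node names and paths are the ones in the log, and the error handling is mine.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	profile := "multinode-545164"
	nodes := []string{"multinode-545164", "multinode-545164-m02", "multinode-545164-m03"}
	for _, n := range nodes {
		// Copy the local test file onto the node...
		cp := exec.Command(mk, "-p", profile, "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp to %s: %v\n%s", n, err, out)
		}
		// ...and read it back over ssh to prove it landed intact.
		cat := exec.Command(mk, "-p", profile, "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		out, err := cat.CombinedOutput()
		if err != nil {
			log.Fatalf("cat on %s: %v\n%s", n, err, out)
		}
		fmt.Printf("%s: %s", n, out)
	}
}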

                                                
                                    
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-545164 node stop m03: (1.343118823s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-545164 status: exit status 7 (546.626907ms)

                                                
                                                
-- stdout --
	multinode-545164
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-545164-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-545164-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr: exit status 7 (558.518098ms)

                                                
                                                
-- stdout --
	multinode-545164
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-545164-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-545164-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:39:09.700057  643377 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:39:09.700260  643377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:39:09.700292  643377 out.go:374] Setting ErrFile to fd 2...
	I1101 11:39:09.700312  643377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:39:09.700731  643377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:39:09.700982  643377 out.go:368] Setting JSON to false
	I1101 11:39:09.701048  643377 mustload.go:66] Loading cluster: multinode-545164
	I1101 11:39:09.701107  643377 notify.go:221] Checking for updates...
	I1101 11:39:09.701507  643377 config.go:182] Loaded profile config "multinode-545164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:39:09.701546  643377 status.go:174] checking status of multinode-545164 ...
	I1101 11:39:09.702156  643377 cli_runner.go:164] Run: docker container inspect multinode-545164 --format={{.State.Status}}
	I1101 11:39:09.723666  643377 status.go:371] multinode-545164 host status = "Running" (err=<nil>)
	I1101 11:39:09.723691  643377 host.go:66] Checking if "multinode-545164" exists ...
	I1101 11:39:09.724000  643377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-545164
	I1101 11:39:09.764804  643377 host.go:66] Checking if "multinode-545164" exists ...
	I1101 11:39:09.765113  643377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:39:09.765154  643377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-545164
	I1101 11:39:09.785111  643377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33630 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/multinode-545164/id_rsa Username:docker}
	I1101 11:39:09.887977  643377 ssh_runner.go:195] Run: systemctl --version
	I1101 11:39:09.894888  643377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:39:09.908161  643377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:39:09.967700  643377 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 11:39:09.958438846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:39:09.968236  643377 kubeconfig.go:125] found "multinode-545164" server: "https://192.168.67.2:8443"
	I1101 11:39:09.968275  643377 api_server.go:166] Checking apiserver status ...
	I1101 11:39:09.968322  643377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:39:09.979747  643377 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	I1101 11:39:09.988083  643377 api_server.go:182] apiserver freezer: "10:freezer:/docker/ef387f167adfef7aa4215f8468ac742f16bcfdd74b9a2da756c57d469d024ab5/crio/crio-e827e91270d0da99202079b40b51147ab34c433419a1208477e19a9a584ed04f"
	I1101 11:39:09.988150  643377 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ef387f167adfef7aa4215f8468ac742f16bcfdd74b9a2da756c57d469d024ab5/crio/crio-e827e91270d0da99202079b40b51147ab34c433419a1208477e19a9a584ed04f/freezer.state
	I1101 11:39:09.996073  643377 api_server.go:204] freezer state: "THAWED"
	I1101 11:39:09.996103  643377 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 11:39:10.005914  643377 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 11:39:10.005945  643377 status.go:463] multinode-545164 apiserver status = Running (err=<nil>)
	I1101 11:39:10.005957  643377 status.go:176] multinode-545164 status: &{Name:multinode-545164 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:39:10.005982  643377 status.go:174] checking status of multinode-545164-m02 ...
	I1101 11:39:10.006311  643377 cli_runner.go:164] Run: docker container inspect multinode-545164-m02 --format={{.State.Status}}
	I1101 11:39:10.028182  643377 status.go:371] multinode-545164-m02 host status = "Running" (err=<nil>)
	I1101 11:39:10.028212  643377 host.go:66] Checking if "multinode-545164-m02" exists ...
	I1101 11:39:10.028589  643377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-545164-m02
	I1101 11:39:10.047024  643377 host.go:66] Checking if "multinode-545164-m02" exists ...
	I1101 11:39:10.047350  643377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:39:10.047406  643377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-545164-m02
	I1101 11:39:10.064894  643377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33635 SSHKeyPath:/home/jenkins/minikube-integration/21830-532863/.minikube/machines/multinode-545164-m02/id_rsa Username:docker}
	I1101 11:39:10.168091  643377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:39:10.182598  643377 status.go:176] multinode-545164-m02 status: &{Name:multinode-545164-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:39:10.182632  643377 status.go:174] checking status of multinode-545164-m03 ...
	I1101 11:39:10.182951  643377 cli_runner.go:164] Run: docker container inspect multinode-545164-m03 --format={{.State.Status}}
	I1101 11:39:10.198852  643377 status.go:371] multinode-545164-m03 host status = "Stopped" (err=<nil>)
	I1101 11:39:10.198875  643377 status.go:384] host is not running, skipping remaining checks
	I1101 11:39:10.198882  643377 status.go:176] multinode-545164-m03 status: &{Name:multinode-545164-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
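
Worth noting from the output above: once m03 is stopped, minikube status still prints the per-node table but exits with status 7, so anything scripting around it has to treat 7 as "at least one node is down" rather than a hard failure. A sketch of that handling; the meaning of exit code 7 is inferred from this run, not taken from minikube documentation.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-545164", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the per-node table is printed either way
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Println("at least one node reported as stopped (exit 7), as in the log above")
	default:
		log.Fatalf("status failed: %v", err)
	}
}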

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-545164 node start m03 -v=5 --alsologtostderr: (7.652559903s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.45s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-545164
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-545164
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-545164: (25.095661384s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-545164 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-545164 --wait=true -v=5 --alsologtostderr: (57.145748065s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-545164
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.36s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-545164 node delete m03: (4.975532579s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-545164 stop: (24.116563612s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-545164 status: exit status 7 (85.732777ms)

                                                
                                                
-- stdout --
	multinode-545164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-545164-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr: exit status 7 (87.957758ms)

                                                
                                                
-- stdout --
	multinode-545164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-545164-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:41:10.938781  651171 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:41:10.938890  651171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:41:10.938900  651171 out.go:374] Setting ErrFile to fd 2...
	I1101 11:41:10.938906  651171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:41:10.939153  651171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:41:10.939336  651171 out.go:368] Setting JSON to false
	I1101 11:41:10.939378  651171 mustload.go:66] Loading cluster: multinode-545164
	I1101 11:41:10.939438  651171 notify.go:221] Checking for updates...
	I1101 11:41:10.940338  651171 config.go:182] Loaded profile config "multinode-545164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:41:10.940361  651171 status.go:174] checking status of multinode-545164 ...
	I1101 11:41:10.940951  651171 cli_runner.go:164] Run: docker container inspect multinode-545164 --format={{.State.Status}}
	I1101 11:41:10.959929  651171 status.go:371] multinode-545164 host status = "Stopped" (err=<nil>)
	I1101 11:41:10.959953  651171 status.go:384] host is not running, skipping remaining checks
	I1101 11:41:10.959960  651171 status.go:176] multinode-545164 status: &{Name:multinode-545164 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:41:10.959993  651171 status.go:174] checking status of multinode-545164-m02 ...
	I1101 11:41:10.960303  651171 cli_runner.go:164] Run: docker container inspect multinode-545164-m02 --format={{.State.Status}}
	I1101 11:41:10.979792  651171 status.go:371] multinode-545164-m02 host status = "Stopped" (err=<nil>)
	I1101 11:41:10.979819  651171 status.go:384] host is not running, skipping remaining checks
	I1101 11:41:10.979834  651171 status.go:176] multinode-545164-m02 status: &{Name:multinode-545164-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-545164 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 11:41:17.525286  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-545164 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.441539967s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-545164 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.42s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-545164
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-545164-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-545164-m02 --driver=docker  --container-runtime=crio: exit status 14 (120.388031ms)

                                                
                                                
-- stdout --
	* [multinode-545164-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-545164-m02' is duplicated with machine name 'multinode-545164-m02' in profile 'multinode-545164'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-545164-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-545164-m03 --driver=docker  --container-runtime=crio: (35.258853842s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-545164
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-545164: exit status 80 (350.314695ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-545164 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-545164-m03 already exists in multinode-545164-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-545164-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-545164-m03: (2.062814686s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.85s)

                                                
                                    
TestPreload (129.26s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-584206 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 11:43:25.235209  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-584206 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.7087098s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-584206 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-584206 image pull gcr.io/k8s-minikube/busybox: (2.243896163s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-584206
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-584206: (5.937424522s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-584206 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-584206 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.649087415s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-584206 image list
helpers_test.go:175: Cleaning up "test-preload-584206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-584206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-584206: (2.486464836s)
--- PASS: TestPreload (129.26s)
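
The preload test is a four-step flow: start with --preload=false on an older Kubernetes version, pull an extra image, stop, restart (preload enabled by default), and confirm via image list that the pulled image survived. The sketch below strings the same commands together; versions and the image name are the ones in the log, and the run helper is illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and aborts on error.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const p = "test-preload-584206"
	run("start", "-p", p, "--memory=3072", "--preload=false",
		"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.32.0")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--memory=3072", "--wait=true",
		"--driver=docker", "--container-runtime=crio")
	fmt.Print(run("-p", p, "image", "list")) // busybox should still be listed
}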

                                                
                                    
TestScheduledStopUnix (107.47s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-521970 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-521970 --memory=3072 --driver=docker  --container-runtime=crio: (31.747595951s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-521970 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-521970 -n scheduled-stop-521970
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-521970 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 11:45:24.123995  534720 retry.go:31] will retry after 99.302µs: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.125375  534720 retry.go:31] will retry after 145.755µs: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.125770  534720 retry.go:31] will retry after 224.627µs: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.128213  534720 retry.go:31] will retry after 300.314µs: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.129345  534720 retry.go:31] will retry after 716.44µs: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.130467  534720 retry.go:31] will retry after 599.358µs: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.131586  534720 retry.go:31] will retry after 1.608162ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.133775  534720 retry.go:31] will retry after 1.036838ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.134900  534720 retry.go:31] will retry after 3.076838ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.139110  534720 retry.go:31] will retry after 5.172808ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.145305  534720 retry.go:31] will retry after 8.580623ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.154553  534720 retry.go:31] will retry after 8.86285ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.163784  534720 retry.go:31] will retry after 6.808652ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.171026  534720 retry.go:31] will retry after 24.857263ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
I1101 11:45:24.196269  534720 retry.go:31] will retry after 35.419469ms: open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/scheduled-stop-521970/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-521970 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-521970 -n scheduled-stop-521970
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-521970
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-521970 --schedule 15s
E1101 11:46:00.599507  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1101 11:46:17.525819  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:46:28.300501  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-521970
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-521970: exit status 7 (72.334487ms)

                                                
                                                
-- stdout --
	scheduled-stop-521970
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-521970 -n scheduled-stop-521970
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-521970 -n scheduled-stop-521970: exit status 7 (73.497621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-521970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-521970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-521970: (4.141304826s)
--- PASS: TestScheduledStopUnix (107.47s)
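
The retry.go:31 lines above show the test polling for the scheduled-stop pid file with steadily growing waits. Below is a minimal sketch of that retry-with-increasing-delay pattern, using a hypothetical path and attempt count; it illustrates the idea only and is not minikube's retry implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls for a pid file, roughly doubling the wait between
// attempts, and returns its contents once the file appears.
func waitForPidFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return nil, fmt.Errorf("pid file %s not found after %d attempts", path, attempts)
}

func main() {
	if data, err := waitForPidFile("/tmp/example-profile/pid", 10); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("pid file contents: %s\n", data)
	}
}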

                                                
                                    
TestInsufficientStorage (13.6s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-262440 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-262440 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.93415732s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"44899777-37bc-4739-ae13-a6059325cb8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-262440] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1633f797-bf77-4665-8fb2-2f1a038d646b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"c6bc6cf4-7e4e-42a3-91dc-97d453a9c071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"69086275-c292-4fd8-af83-c5d011fb30b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig"}}
	{"specversion":"1.0","id":"8e21e95f-7c06-4d0f-b66f-c6f57ed3e831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube"}}
	{"specversion":"1.0","id":"e9732542-ad8d-4cd2-83ef-49e89c8571d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4fec976b-f17b-4ffb-a954-d3ad7947566d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"19c3ab44-9eb0-4be6-ad47-47fb60c577e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d638effa-7b5a-49c6-a6bc-7c0f01973241","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6c1074df-7d00-47fe-a0dd-247fc26536c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0632ee8a-1459-4ee5-8cac-24b660e11047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"27e2db00-f551-45ed-99e5-b81e3c9f45b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-262440\" primary control-plane node in \"insufficient-storage-262440\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"47a8bcb4-5943-460e-a47f-0e20ed99cb07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a88f688-6a3b-451e-9129-ea130f2640fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"aebd3087-4f50-4d20-b41b-9c70113a947c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-262440 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-262440 --output=json --layout=cluster: exit status 7 (336.213856ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-262440","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-262440","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 11:46:50.585658  667275 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-262440" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-262440 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-262440 --output=json --layout=cluster: exit status 7 (340.074449ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-262440","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-262440","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 11:46:50.928689  667342 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-262440" does not appear in /home/jenkins/minikube-integration/21830-532863/kubeconfig
	E1101 11:46:50.938875  667342 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/insufficient-storage-262440/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-262440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-262440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-262440: (1.9859517s)
--- PASS: TestInsufficientStorage (13.60s)
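
The --output=json lines captured above are CloudEvents envelopes, one JSON object per line, with the interesting fields nested under "data". A minimal Go sketch of extracting the error message and exit code from one such line follows; the field names are taken from the sample output, but this is not the test's own parser.

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent captures only the fields of one --output=json line used here.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// A trimmed copy of the error event emitted above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","message":"Docker is out of disk space! (/var is at 100% of capacity)."}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}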

                                                
                                    
TestRunningBinaryUpgrade (49.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2591051032 start -p running-upgrade-496459 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2591051032 start -p running-upgrade-496459 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.954362458s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-496459 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-496459 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.148012575s)
helpers_test.go:175: Cleaning up "running-upgrade-496459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-496459
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-496459: (2.149387725s)
--- PASS: TestRunningBinaryUpgrade (49.95s)

                                                
                                    
TestKubernetesUpgrade (342.08s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.132232501s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-396779
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-396779: (1.325633002s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-396779 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-396779 status --format={{.Host}}: exit status 7 (77.810355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.731215731s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-396779 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (122.888216ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-396779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-396779
	    minikube start -p kubernetes-upgrade-396779 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3967792 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-396779 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-396779 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.148019904s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-396779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-396779
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-396779: (2.425035392s)
--- PASS: TestKubernetesUpgrade (342.08s)
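
The K8S_DOWNGRADE_UNSUPPORTED exit above comes from refusing to move an existing v1.34.1 cluster back to v1.28.0. A tiny Go sketch of that kind of version comparison is shown below, assuming plain vMAJOR.MINOR.PATCH strings; minikube's real check is more involved, so this only illustrates why the downgrade request is rejected.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.34.1" into [1 34 1]; malformed parts simply parse as 0.
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// isDowngrade reports whether the requested version is older than the
// version the existing cluster is already running.
func isDowngrade(existing, requested string) bool {
	e, r := parse(existing), parse(requested)
	for i := 0; i < 3; i++ {
		if r[i] != e[i] {
			return r[i] < e[i]
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.34.1", "v1.28.0")) // true: the downgrade is refused
}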

                                                
                                    
TestMissingContainerUpgrade (104.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3653685373 start -p missing-upgrade-598273 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3653685373 start -p missing-upgrade-598273 --memory=3072 --driver=docker  --container-runtime=crio: (1m0.419852883s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-598273
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-598273
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-598273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 11:48:25.234955  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-598273 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.42759446s)
helpers_test.go:175: Cleaning up "missing-upgrade-598273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-598273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-598273: (1.963605881s)
--- PASS: TestMissingContainerUpgrade (104.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656070 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-656070 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (101.780327ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-656070] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656070 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656070 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.328428907s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-656070 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.77s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (117.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m54.54539384s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-656070 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-656070 status -o json: exit status 2 (441.322701ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-656070","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-656070
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-656070: (2.276544583s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (117.26s)
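
The `status -o json` output above shows the expected shape for a --no-kubernetes profile: the host is Running while kubelet and apiserver report Stopped, which is why the command exits with status 2. A minimal Go sketch of decoding that JSON, with struct fields mirroring the keys in the sample line (illustrative only, not the test's code):

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the keys in the `status -o json` line above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-656070","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the host keeps running while kubelet and apiserver stay stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}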

                                                
                                    
TestNoKubernetes/serial/Start (8.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.448039822s)
--- PASS: TestNoKubernetes/serial/Start (8.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-656070 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-656070 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.420541ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
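
The `ssh: Process exited with status 3` above is the expected outcome here: `systemctl is-active --quiet` exits non-zero (typically 3) when the unit is inactive, which is exactly what this test wants with Kubernetes disabled. Below is a minimal local Go sketch of checking a unit that way, without the ssh wrapping the real test uses; treat any non-zero exit as "not active".

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `is-active --quiet` prints nothing and reports state via its exit code.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet is not active (exit status %d)\n", exitErr.ExitCode())
	default:
		fmt.Printf("could not run systemctl: %v\n", err)
	}
}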

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (15.651463031s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (15.757353613s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.41s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-656070
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-656070: (1.353953208s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-656070 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-656070 --driver=docker  --container-runtime=crio: (7.569870155s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-656070 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-656070 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.806082ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (54.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3962748430 start -p stopped-upgrade-043825 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3962748430 start -p stopped-upgrade-043825 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.294336983s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3962748430 -p stopped-upgrade-043825 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3962748430 -p stopped-upgrade-043825 stop: (1.371631069s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-043825 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 11:51:17.525306  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-043825 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.071413568s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (54.74s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-043825
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-043825: (1.128367435s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
TestPause/serial/Start (84.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-482771 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1101 11:53:25.234991  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-482771 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.375717148s)
--- PASS: TestPause/serial/Start (84.38s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-482771 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-482771 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.21009976s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.22s)

                                                
                                    
TestNetworkPlugins/group/false (5.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-507511 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-507511 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (311.284386ms)

                                                
                                                
-- stdout --
	* [false-507511] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:54:25.051651  703379 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:54:25.051881  703379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:54:25.051908  703379 out.go:374] Setting ErrFile to fd 2...
	I1101 11:54:25.051929  703379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:54:25.052263  703379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-532863/.minikube/bin
	I1101 11:54:25.052729  703379 out.go:368] Setting JSON to false
	I1101 11:54:25.053673  703379 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13014,"bootTime":1761985051,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1101 11:54:25.053829  703379 start.go:143] virtualization:  
	I1101 11:54:25.057854  703379 out.go:179] * [false-507511] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:54:25.061479  703379 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:54:25.063410  703379 notify.go:221] Checking for updates...
	I1101 11:54:25.068219  703379 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:54:25.071314  703379 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-532863/kubeconfig
	I1101 11:54:25.076681  703379 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-532863/.minikube
	I1101 11:54:25.079817  703379 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:54:25.082885  703379 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:54:25.086400  703379 config.go:182] Loaded profile config "force-systemd-flag-643844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:54:25.086610  703379 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:54:25.135730  703379 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:54:25.135855  703379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:54:25.240595  703379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-01 11:54:25.229520224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:54:25.240706  703379 docker.go:319] overlay module found
	I1101 11:54:25.245426  703379 out.go:179] * Using the docker driver based on user configuration
	I1101 11:54:25.249474  703379 start.go:309] selected driver: docker
	I1101 11:54:25.249494  703379 start.go:930] validating driver "docker" against <nil>
	I1101 11:54:25.249507  703379 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:54:25.253615  703379 out.go:203] 
	W1101 11:54:25.256277  703379 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 11:54:25.259364  703379 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-507511 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-507511" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-507511

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507511"

                                                
                                                
----------------------- debugLogs end: false-507511 [took: 4.934732823s] --------------------------------
helpers_test.go:175: Cleaning up "false-507511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-507511
--- PASS: TestNetworkPlugins/group/false (5.45s)
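
Every probe in the debugLogs dump above fails the same way because the false-507511 profile was never started: no kubeconfig context exists for it, so the kubectl config dump is empty and the context lookup errors out. A minimal Go sketch of that pre-check (a hypothetical helper, not part of the test harness; kubectl is assumed to be on PATH):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "false-507511"

	// The debug dump's kubectl checks can only succeed if a context named
	// after the profile exists in the kubeconfig.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		log.Fatalf("listing contexts failed: %v", err)
	}
	for _, name := range strings.Fields(string(out)) {
		if name == profile {
			fmt.Println("context exists:", profile)
			return
		}
	}
	fmt.Printf("context %q not found; run \"minikube start -p %s\" first\n", profile, profile)
}

Run against this profile it prints the same hint the debug output repeats: start the cluster first.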

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (60.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1101 11:56:17.526243  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.122773571s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-952358 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a2cae1c5-c388-493d-93c1-2ea919b16ea1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a2cae1c5-c388-493d-93c1-2ea919b16ea1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003206625s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-952358 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
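
The DeployApp steps above follow a single pattern: create the busybox pod from testdata/busybox.yaml, wait until pods matching integration-test=busybox are Ready, then exec ulimit -n in the container. A rough Go sketch of that sequence, shelling out the way the harness's "(dbg) Run:" lines do; kubectl wait stands in for the harness's own polling helpers, and kubectl is assumed to be on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, roughly what the
// "(dbg) Run:" lines in the log capture.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "old-k8s-version-952358"

	// Create the busybox pod from the test's manifest.
	run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")

	// Wait for pods matching integration-test=busybox to become Ready
	// (the log allows up to 8m0s for this).
	run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")

	// Finally check the container's open-file limit, as the test does.
	fmt.Print(run("kubectl", "--context", ctx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n"))
}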

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-952358 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-952358 --alsologtostderr -v=3: (12.054128769s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358: exit status 7 (73.312592ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-952358 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
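
EnableAddonAfterStop relies on minikube status exiting with code 7 once the host is stopped ("may be ok" in the log) and on addons still being configurable while the cluster is down. A small sketch of that tolerance, assuming the binary path and profile name shown above:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-arm64"
	profile := "old-k8s-version-952358"

	// On a stopped cluster "minikube status" exits with code 7; the test
	// treats that as acceptable rather than a failure.
	out, err := exec.Command(minikube, "status", "--format={{.Host}}",
		"-p", profile, "-n", profile).CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Printf("host stopped (exit 7, may be ok): %s", out)
	default:
		log.Fatalf("status failed: %v\n%s", err, out)
	}

	// Addons can still be toggled while the cluster is down.
	if out, err := exec.Command(minikube, "addons", "enable", "dashboard",
		"-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").CombinedOutput(); err != nil {
		log.Fatalf("enable dashboard failed: %v\n%s", err, out)
	}
}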

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (53.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-952358 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.585128889s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-952358 -n old-k8s-version-952358
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nhfb8" [b6cd70f9-cc0d-4ddf-9438-3f717d09de5d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002999567s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nhfb8" [b6cd70f9-cc0d-4ddf-9438-3f717d09de5d] Running
E1101 11:58:25.235415  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004808855s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-952358 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
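
AddonExistsAfterStop repeats the label-based wait, this time for the dashboard pods in the kubernetes-dashboard namespace, and then describes the metrics-scraper deployment. A sketch under the same assumptions as above (kubectl on PATH, kubectl wait substituted for the harness's pollers):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl runs a kubectl command against the restarted cluster and aborts
// on failure.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "old-k8s-version-952358"
	ns := "kubernetes-dashboard"

	// Dashboard pods must come back Ready after the restart
	// (the log budgets 9m0s for this).
	kubectl("--context", ctx, "-n", ns, "wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=kubernetes-dashboard", "--timeout=9m")

	// The test then inspects the companion metrics-scraper deployment.
	fmt.Print(kubectl("--context", ctx, "-n", ns,
		"describe", "deploy/dashboard-metrics-scraper"))
}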

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-952358 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
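
VerifyKubernetesImages lists the images cached in the profile and reports anything outside the expected Kubernetes set, which is how the kindnetd and busybox images show up above. The sketch below is a simplification: it assumes the plain one-image-per-line output of "minikube image list" (the test itself uses --format=json), and a bare registry.k8s.io prefix check stands in for the harness's real allow-list:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Plain "image list" output (one reference per line) is assumed here.
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "old-k8s-version-952358", "image", "list").Output()
	if err != nil {
		log.Fatalf("image list failed: %v", err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		// Simplified check: anything outside registry.k8s.io is reported.
		if !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}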

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (76.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.861794995s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (89.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.275440455s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-198717 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [00673c7a-bc5a-4041-b86d-7c60acfabc54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [00673c7a-bc5a-4041-b86d-7c60acfabc54] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004081015s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-198717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-198717 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-198717 --alsologtostderr -v=3: (12.034283016s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717: exit status 7 (65.376729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-198717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (51.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-198717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.787139775s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-198717 -n no-preload-198717
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-816860 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e41d4e23-ef87-4bf1-a0d7-6261913ab0ec] Pending
helpers_test.go:352: "busybox" [e41d4e23-ef87-4bf1-a0d7-6261913ab0ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e41d4e23-ef87-4bf1-a0d7-6261913ab0ec] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003462318s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-816860 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.50s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-816860 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-816860 --alsologtostderr -v=3: (12.602803625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860: exit status 7 (74.137909ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-816860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (52.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-816860 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.620956998s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-816860 -n embed-certs-816860
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n6g7x" [9e7dd2ec-dda5-4696-8a6d-235d16273511] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003563699s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n6g7x" [9e7dd2ec-dda5-4696-8a6d-235d16273511] Running
E1101 12:01:17.526031  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003388421s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-198717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-198717 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.06764322s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2zdqk" [72e9dd90-be25-4d16-9784-546c4978c4e1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004339832s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2zdqk" [72e9dd90-be25-4d16-9784-546c4978c4e1] Running
E1101 12:01:58.798534  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:58.804839  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:58.816151  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:58.837493  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:58.878824  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:58.960090  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:59.121500  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:01:59.442824  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:02:00.084134  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004874762s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-816860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-816860 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (38.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 12:02:19.291410  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:02:39.772677  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:02:40.601270  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/addons-780397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (38.327855196s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-915456 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-915456 --alsologtostderr -v=3: (1.404003484s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456: exit status 7 (72.657251ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-915456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (18.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-915456 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (17.657469685s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-915456 -n newest-cni-915456
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3b3c8cec-2ef2-493b-987d-c2ebda1abcd9] Pending
helpers_test.go:352: "busybox" [3b3c8cec-2ef2-493b-987d-c2ebda1abcd9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3b3c8cec-2ef2-493b-987d-c2ebda1abcd9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003629094s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-772362 --alsologtostderr -v=3
E1101 12:03:08.301834  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-772362 --alsologtostderr -v=3: (12.359002737s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-915456 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362: exit status 7 (82.541416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-772362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 12:03:20.734821  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-772362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.538761834s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-772362 -n default-k8s-diff-port-772362
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1101 12:03:25.235093  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.782433844s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v9lb6" [e4488a24-15da-4027-9207-87a2d638e13e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003401989s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v9lb6" [e4488a24-15da-4027-9207-87a2d638e13e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003412133s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-772362 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-772362 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (55.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1101 12:04:42.656425  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.917673382s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-507511 "pgrep -a kubelet"
I1101 12:04:49.131099  534720 config.go:182] Loaded profile config "auto-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dvccz" [7961b041-472f-4522-8ab4-6cddaa7e6826] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 12:04:54.654569  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:54.660931  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:54.672209  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:54.693563  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:54.734976  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:54.816387  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:54.978613  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:55.300207  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dvccz" [7961b041-472f-4522-8ab4-6cddaa7e6826] Running
E1101 12:04:55.941638  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:57.223003  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:04:59.785167  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004172919s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-507511 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
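
The auto/DNS, auto/Localhost and auto/HairPin steps above run three quick probes inside the netcat deployment: resolve kubernetes.default through cluster DNS, connect to port 8080 on loopback, and connect back through the service name to confirm hairpin traffic works. A compact sketch of those probes, with context and deployment names taken from the log and kubectl assumed on PATH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment and aborts if it fails.
func probe(ctx, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", ctx, "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	if err != nil {
		log.Fatalf("probe %q failed: %v\n%s", shellCmd, err, out)
	}
	fmt.Printf("ok: %s\n%s", shellCmd, out)
}

func main() {
	ctx := "auto-507511"
	probe(ctx, "nslookup kubernetes.default")    // cluster DNS resolves service names
	probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // pod reaches itself on loopback
	probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // hairpin: back in through the service name
}

The same three probes are repeated for each CNI variant (kindnet, calico, custom-flannel) later in the run.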

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.750469935s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-25pd9" [330fa921-a200-48ff-9f29-23608f6f314f] Running
E1101 12:05:35.629423  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/no-preload-198717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004141395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-507511 "pgrep -a kubelet"
I1101 12:05:40.211417  534720 config.go:182] Loaded profile config "kindnet-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
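
KubeletFlags just shells into the node and prints the running kubelet command line, presumably so the expected flags for the selected container runtime can be checked. The same command can be run by hand:

    # show the kubelet process and its full set of flags on the node
    minikube ssh -p kindnet-507511 "pgrep -a kubelet"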

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fk558" [c31ea196-eee5-44fa-a4ae-4e86b08c057b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fk558" [c31ea196-eee5-44fa-a4ae-4e86b08c057b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00394272s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)
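
NetCatPod force-recreates the shared netcat test workload from testdata/netcat-deployment.yaml (the manifest itself is not included in this report) and then waits up to 15m for the pod to become Ready. A manual sketch under that assumption:

    # (re)create the netcat test deployment and wait for its rollout to finish
    kubectl --context kindnet-507511 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-507511 rollout status deployment/netcat --timeout=15m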

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-507511 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (67.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.800871038s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.80s)
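
Unlike the built-in plugin names used elsewhere in this group, --cni here points at a CNI manifest on disk (testdata/kube-flannel.yaml), so the same start command doubles as a sketch for wiring in any custom CNI:

    # start a profile whose CNI comes from a user-supplied manifest
    minikube start -p custom-flannel-507511 --memory=3072 --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio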

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pbw75" [5644240b-9168-4f87-8237-96096beb0d94] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005825375s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-507511 "pgrep -a kubelet"
I1101 12:06:36.019045  534720 config.go:182] Loaded profile config "calico-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8lr7s" [deebdb8b-9ee8-47ce-b86d-0108f5f30b9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8lr7s" [deebdb8b-9ee8-47ce-b86d-0108f5f30b9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.002806785s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-507511 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (75.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1101 12:07:26.498019  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/old-k8s-version-952358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.788635995s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.79s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-507511 "pgrep -a kubelet"
I1101 12:07:27.108416  534720 config.go:182] Loaded profile config "custom-flannel-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8jmgk" [48e84823-5cf8-4210-8ada-51d04ff8632f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8jmgk" [48e84823-5cf8-4210-8ada-51d04ff8632f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004494724s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-507511 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1101 12:08:06.260531  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:08:16.501898  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:08:25.235371  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/functional-203469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.034224613s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.03s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-507511 "pgrep -a kubelet"
I1101 12:08:30.772072  534720 config.go:182] Loaded profile config "enable-default-cni-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x7q26" [4f1261af-b129-4718-9db9-eeaf92cb6478] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x7q26" [4f1261af-b129-4718-9db9-eeaf92cb6478] Running
E1101 12:08:36.983843  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004780943s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-507511 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qbdbq" [d2e1e170-bf63-4419-911f-b994e52e78ce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003267232s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (81.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-507511 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m21.688214881s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.69s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-507511 "pgrep -a kubelet"
I1101 12:09:09.158806  534720 config.go:182] Loaded profile config "flannel-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8l2kv" [0caa85de-fa97-45a4-8d06-e2b0db03aebf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8l2kv" [0caa85de-fa97-45a4-8d06-e2b0db03aebf] Running
E1101 12:09:17.945341  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/default-k8s-diff-port-772362/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004897228s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-507511 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-507511 "pgrep -a kubelet"
I1101 12:10:25.690769  534720 config.go:182] Loaded profile config "bridge-507511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-507511 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4ckfs" [2b9ba9b6-d118-45d8-adf5-f10c4c0e5602] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4ckfs" [2b9ba9b6-d118-45d8-adf5-f10c4c0e5602] Running
E1101 12:10:30.419244  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/auto-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:33.826974  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:33.833477  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:33.844875  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:33.866287  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:33.907675  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:33.989397  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:34.150896  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 12:10:34.472583  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010248084s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-507511 exec deployment/netcat -- nslookup kubernetes.default
E1101 12:10:35.114640  534720 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-532863/.minikube/profiles/kindnet-507511/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-507511 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-524809 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-524809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-524809
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-783522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-783522
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-507511 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-507511" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-507511

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507511"

                                                
                                                
----------------------- debugLogs end: kubenet-507511 [took: 5.035809948s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-507511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-507511
--- SKIP: TestNetworkPlugins/group/kubenet (5.24s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-507511 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-507511" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-507511

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-507511" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507511"

                                                
                                                
----------------------- debugLogs end: cilium-507511 [took: 5.023434243s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-507511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-507511
--- SKIP: TestNetworkPlugins/group/cilium (5.19s)

                                                
                                    